00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 116 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3617 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.139 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.140 The recommended git tool is: git 00:00:00.140 using credential 00000000-0000-0000-0000-000000000002 00:00:00.143 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.190 Fetching changes from the remote Git repository 00:00:00.192 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.229 Using shallow fetch with depth 1 00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.229 > git --version # timeout=10 00:00:00.263 > git --version # 'git version 2.39.2' 00:00:00.263 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.283 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.283 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.866 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.877 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.889 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.889 > git config core.sparsecheckout # timeout=10 00:00:06.899 > git read-tree -mu HEAD # timeout=10 00:00:06.915 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.932 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.933 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:07.026 [Pipeline] Start of Pipeline 00:00:07.039 [Pipeline] library 00:00:07.041 Loading library shm_lib@master 00:00:07.041 Library shm_lib@master is cached. Copying from home. 00:00:07.054 [Pipeline] node 00:00:07.067 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.069 [Pipeline] { 00:00:07.082 [Pipeline] catchError 00:00:07.084 [Pipeline] { 00:00:07.095 [Pipeline] wrap 00:00:07.103 [Pipeline] { 00:00:07.111 [Pipeline] stage 00:00:07.113 [Pipeline] { (Prologue) 00:00:07.133 [Pipeline] echo 00:00:07.134 Node: VM-host-SM9 00:00:07.141 [Pipeline] cleanWs 00:00:07.151 [WS-CLEANUP] Deleting project workspace... 00:00:07.151 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.157 [WS-CLEANUP] done 00:00:07.386 [Pipeline] setCustomBuildProperty 00:00:07.472 [Pipeline] httpRequest 00:00:08.359 [Pipeline] echo 00:00:08.360 Sorcerer 10.211.164.101 is alive 00:00:08.368 [Pipeline] retry 00:00:08.371 [Pipeline] { 00:00:08.384 [Pipeline] httpRequest 00:00:08.389 HttpMethod: GET 00:00:08.389 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.390 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.402 Response Code: HTTP/1.1 200 OK 00:00:08.403 Success: Status code 200 is in the accepted range: 200,404 00:00:08.403 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:15.842 [Pipeline] } 00:00:15.859 [Pipeline] // retry 00:00:15.867 [Pipeline] sh 00:00:16.149 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:16.165 [Pipeline] httpRequest 00:00:16.587 [Pipeline] echo 00:00:16.589 Sorcerer 10.211.164.101 is alive 00:00:16.599 [Pipeline] retry 00:00:16.601 [Pipeline] { 00:00:16.616 [Pipeline] httpRequest 00:00:16.621 HttpMethod: GET 00:00:16.622 URL: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:16.623 Sending request to url: http://10.211.164.101/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:16.641 Response Code: HTTP/1.1 200 OK 00:00:16.641 Success: Status code 200 is in the accepted range: 200,404 00:00:16.642 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:51.431 [Pipeline] } 00:01:51.448 [Pipeline] // retry 00:01:51.456 [Pipeline] sh 00:01:51.737 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:54.283 [Pipeline] sh 00:01:54.565 + git -C spdk log --oneline -n5 00:01:54.565 b18e1bd62 version: v24.09.1-pre 00:01:54.565 19524ad45 version: v24.09 00:01:54.565 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:54.565 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:54.565 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:54.585 [Pipeline] withCredentials 00:01:54.596 > git --version # timeout=10 00:01:54.609 > git --version # 'git version 2.39.2' 00:01:54.625 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:54.628 [Pipeline] { 00:01:54.637 [Pipeline] retry 00:01:54.639 [Pipeline] { 00:01:54.656 [Pipeline] sh 00:01:54.967 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:55.015 [Pipeline] } 00:01:55.032 [Pipeline] // retry 00:01:55.038 [Pipeline] } 00:01:55.053 [Pipeline] // withCredentials 00:01:55.063 [Pipeline] httpRequest 00:01:55.462 [Pipeline] echo 00:01:55.464 Sorcerer 10.211.164.101 is alive 00:01:55.473 [Pipeline] retry 00:01:55.475 [Pipeline] { 00:01:55.491 [Pipeline] httpRequest 00:01:55.495 HttpMethod: GET 00:01:55.496 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:55.497 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:55.497 Response Code: HTTP/1.1 200 OK 00:01:55.498 Success: Status code 200 is in the accepted range: 200,404 00:01:55.498 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:02.235 [Pipeline] } 00:02:02.251 [Pipeline] // retry 00:02:02.258 [Pipeline] sh 00:02:02.537 + 
tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:04.451 [Pipeline] sh 00:02:04.730 + git -C dpdk log --oneline -n5 00:02:04.730 eeb0605f11 version: 23.11.0 00:02:04.730 238778122a doc: update release notes for 23.11 00:02:04.730 46aa6b3cfc doc: fix description of RSS features 00:02:04.730 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:04.730 7e421ae345 devtools: support skipping forbid rule check 00:02:04.747 [Pipeline] writeFile 00:02:04.761 [Pipeline] sh 00:02:05.042 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:05.054 [Pipeline] sh 00:02:05.336 + cat autorun-spdk.conf 00:02:05.336 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.336 SPDK_TEST_NVMF=1 00:02:05.336 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.336 SPDK_TEST_URING=1 00:02:05.336 SPDK_TEST_VFIOUSER=1 00:02:05.336 SPDK_TEST_USDT=1 00:02:05.336 SPDK_RUN_UBSAN=1 00:02:05.336 NET_TYPE=virt 00:02:05.336 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.336 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.336 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.343 RUN_NIGHTLY=1 00:02:05.344 [Pipeline] } 00:02:05.357 [Pipeline] // stage 00:02:05.372 [Pipeline] stage 00:02:05.374 [Pipeline] { (Run VM) 00:02:05.386 [Pipeline] sh 00:02:05.667 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:05.667 + echo 'Start stage prepare_nvme.sh' 00:02:05.667 Start stage prepare_nvme.sh 00:02:05.667 + [[ -n 1 ]] 00:02:05.667 + disk_prefix=ex1 00:02:05.667 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:05.667 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:05.667 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:05.667 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.667 ++ SPDK_TEST_NVMF=1 00:02:05.667 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.667 ++ SPDK_TEST_URING=1 00:02:05.667 ++ SPDK_TEST_VFIOUSER=1 00:02:05.667 ++ SPDK_TEST_USDT=1 00:02:05.667 ++ SPDK_RUN_UBSAN=1 00:02:05.667 ++ NET_TYPE=virt 00:02:05.667 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.667 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.667 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.667 ++ RUN_NIGHTLY=1 00:02:05.667 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:05.667 + nvme_files=() 00:02:05.667 + declare -A nvme_files 00:02:05.667 + backend_dir=/var/lib/libvirt/images/backends 00:02:05.667 + nvme_files['nvme.img']=5G 00:02:05.667 + nvme_files['nvme-cmb.img']=5G 00:02:05.667 + nvme_files['nvme-multi0.img']=4G 00:02:05.667 + nvme_files['nvme-multi1.img']=4G 00:02:05.667 + nvme_files['nvme-multi2.img']=4G 00:02:05.667 + nvme_files['nvme-openstack.img']=8G 00:02:05.667 + nvme_files['nvme-zns.img']=5G 00:02:05.667 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:05.667 + (( SPDK_TEST_FTL == 1 )) 00:02:05.667 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:05.667 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:05.667 + for nvme in "${!nvme_files[@]}" 00:02:05.667 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:05.667 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:05.667 + for nvme in "${!nvme_files[@]}" 00:02:05.667 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:05.667 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:05.667 + for nvme in "${!nvme_files[@]}" 00:02:05.667 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:05.667 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:05.667 + for nvme in "${!nvme_files[@]}" 00:02:05.667 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:05.926 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:05.926 + for nvme in "${!nvme_files[@]}" 00:02:05.926 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:05.926 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:05.926 + for nvme in "${!nvme_files[@]}" 00:02:05.926 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:05.926 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:05.926 + for nvme in "${!nvme_files[@]}" 00:02:05.926 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:06.185 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:06.185 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:06.185 + echo 'End stage prepare_nvme.sh' 00:02:06.185 End stage prepare_nvme.sh 00:02:06.197 [Pipeline] sh 00:02:06.478 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:06.478 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:02:06.478 00:02:06.478 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:06.478 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:06.478 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:06.478 HELP=0 00:02:06.478 DRY_RUN=0 00:02:06.478 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:02:06.478 NVME_DISKS_TYPE=nvme,nvme, 00:02:06.478 NVME_AUTO_CREATE=0 00:02:06.478 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:02:06.478 NVME_CMB=,, 00:02:06.478 NVME_PMR=,, 00:02:06.478 NVME_ZNS=,, 00:02:06.478 NVME_MS=,, 00:02:06.478 NVME_FDP=,, 
00:02:06.478 SPDK_VAGRANT_DISTRO=fedora39 00:02:06.478 SPDK_VAGRANT_VMCPU=10 00:02:06.478 SPDK_VAGRANT_VMRAM=12288 00:02:06.478 SPDK_VAGRANT_PROVIDER=libvirt 00:02:06.478 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:06.478 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:06.478 SPDK_OPENSTACK_NETWORK=0 00:02:06.478 VAGRANT_PACKAGE_BOX=0 00:02:06.478 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:06.478 FORCE_DISTRO=true 00:02:06.478 VAGRANT_BOX_VERSION= 00:02:06.478 EXTRA_VAGRANTFILES= 00:02:06.478 NIC_MODEL=e1000 00:02:06.478 00:02:06.478 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:06.478 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:09.763 Bringing machine 'default' up with 'libvirt' provider... 00:02:10.021 ==> default: Creating image (snapshot of base box volume). 00:02:10.021 ==> default: Creating domain with the following settings... 00:02:10.021 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731031571_5ea239c5ef925b439589 00:02:10.021 ==> default: -- Domain type: kvm 00:02:10.021 ==> default: -- Cpus: 10 00:02:10.021 ==> default: -- Feature: acpi 00:02:10.021 ==> default: -- Feature: apic 00:02:10.021 ==> default: -- Feature: pae 00:02:10.021 ==> default: -- Memory: 12288M 00:02:10.021 ==> default: -- Memory Backing: hugepages: 00:02:10.021 ==> default: -- Management MAC: 00:02:10.021 ==> default: -- Loader: 00:02:10.021 ==> default: -- Nvram: 00:02:10.021 ==> default: -- Base box: spdk/fedora39 00:02:10.021 ==> default: -- Storage pool: default 00:02:10.021 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731031571_5ea239c5ef925b439589.img (20G) 00:02:10.021 ==> default: -- Volume Cache: default 00:02:10.021 ==> default: -- Kernel: 00:02:10.021 ==> default: -- Initrd: 00:02:10.021 ==> default: -- Graphics Type: vnc 00:02:10.021 ==> default: -- Graphics Port: -1 00:02:10.021 ==> default: -- Graphics IP: 127.0.0.1 00:02:10.021 ==> default: -- Graphics Password: Not defined 00:02:10.021 ==> default: -- Video Type: cirrus 00:02:10.022 ==> default: -- Video VRAM: 9216 00:02:10.022 ==> default: -- Sound Type: 00:02:10.022 ==> default: -- Keymap: en-us 00:02:10.022 ==> default: -- TPM Path: 00:02:10.022 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:10.022 ==> default: -- Command line args: 00:02:10.022 ==> default: -> value=-device, 00:02:10.022 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:10.022 ==> default: -> value=-drive, 00:02:10.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:02:10.022 ==> default: -> value=-device, 00:02:10.022 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.022 ==> default: -> value=-device, 00:02:10.022 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:10.022 ==> default: -> value=-drive, 00:02:10.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:10.022 ==> default: -> value=-device, 00:02:10.022 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.022 ==> default: -> value=-drive, 00:02:10.022 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:10.022 ==> default: -> value=-device, 00:02:10.022 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.022 ==> default: -> value=-drive, 00:02:10.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:10.022 ==> default: -> value=-device, 00:02:10.022 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.281 ==> default: Creating shared folders metadata... 00:02:10.281 ==> default: Starting domain. 00:02:11.670 ==> default: Waiting for domain to get an IP address... 00:02:29.810 ==> default: Waiting for SSH to become available... 00:02:29.810 ==> default: Configuring and enabling network interfaces... 00:02:32.339 default: SSH address: 192.168.121.212:22 00:02:32.339 default: SSH username: vagrant 00:02:32.339 default: SSH auth method: private key 00:02:34.241 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:40.798 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:47.385 ==> default: Mounting SSHFS shared folder... 00:02:47.958 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:47.958 ==> default: Checking Mount.. 00:02:49.355 ==> default: Folder Successfully Mounted! 00:02:49.355 ==> default: Running provisioner: file... 00:02:49.923 default: ~/.gitconfig => .gitconfig 00:02:50.492 00:02:50.492 SUCCESS! 00:02:50.492 00:02:50.492 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:50.492 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:50.492 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:50.492 00:02:50.501 [Pipeline] } 00:02:50.517 [Pipeline] // stage 00:02:50.526 [Pipeline] dir 00:02:50.526 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:50.528 [Pipeline] { 00:02:50.541 [Pipeline] catchError 00:02:50.543 [Pipeline] { 00:02:50.556 [Pipeline] sh 00:02:50.836 + vagrant ssh-config --host vagrant 00:02:50.836 + sed -ne /^Host/,$p 00:02:50.836 + tee ssh_conf 00:02:55.024 Host vagrant 00:02:55.024 HostName 192.168.121.212 00:02:55.024 User vagrant 00:02:55.024 Port 22 00:02:55.024 UserKnownHostsFile /dev/null 00:02:55.024 StrictHostKeyChecking no 00:02:55.024 PasswordAuthentication no 00:02:55.024 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:55.024 IdentitiesOnly yes 00:02:55.024 LogLevel FATAL 00:02:55.024 ForwardAgent yes 00:02:55.024 ForwardX11 yes 00:02:55.024 00:02:55.039 [Pipeline] withEnv 00:02:55.041 [Pipeline] { 00:02:55.055 [Pipeline] sh 00:02:55.334 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:55.334 source /etc/os-release 00:02:55.334 [[ -e /image.version ]] && img=$(< /image.version) 00:02:55.334 # Minimal, systemd-like check. 
00:02:55.334 if [[ -e /.dockerenv ]]; then 00:02:55.334 # Clear garbage from the node's name: 00:02:55.334 # agt-er_autotest_547-896 -> autotest_547-896 00:02:55.334 # $HOSTNAME is the actual container id 00:02:55.334 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:55.334 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:55.334 # We can assume this is a mount from a host where container is running, 00:02:55.334 # so fetch its hostname to easily identify the target swarm worker. 00:02:55.334 container="$(< /etc/hostname) ($agent)" 00:02:55.334 else 00:02:55.334 # Fallback 00:02:55.334 container=$agent 00:02:55.334 fi 00:02:55.334 fi 00:02:55.334 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:55.334 00:02:55.604 [Pipeline] } 00:02:55.620 [Pipeline] // withEnv 00:02:55.629 [Pipeline] setCustomBuildProperty 00:02:55.645 [Pipeline] stage 00:02:55.648 [Pipeline] { (Tests) 00:02:55.665 [Pipeline] sh 00:02:55.947 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:56.220 [Pipeline] sh 00:02:56.501 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:56.775 [Pipeline] timeout 00:02:56.775 Timeout set to expire in 1 hr 0 min 00:02:56.778 [Pipeline] { 00:02:56.793 [Pipeline] sh 00:02:57.073 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:57.640 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:57.652 [Pipeline] sh 00:02:57.931 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:58.201 [Pipeline] sh 00:02:58.478 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:58.494 [Pipeline] sh 00:02:58.773 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:59.031 ++ readlink -f spdk_repo 00:02:59.031 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:59.031 + [[ -n /home/vagrant/spdk_repo ]] 00:02:59.031 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:59.031 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:59.031 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:59.031 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:59.031 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:59.031 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:59.031 + cd /home/vagrant/spdk_repo 00:02:59.031 + source /etc/os-release 00:02:59.031 ++ NAME='Fedora Linux' 00:02:59.031 ++ VERSION='39 (Cloud Edition)' 00:02:59.031 ++ ID=fedora 00:02:59.031 ++ VERSION_ID=39 00:02:59.031 ++ VERSION_CODENAME= 00:02:59.031 ++ PLATFORM_ID=platform:f39 00:02:59.031 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:59.031 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:59.031 ++ LOGO=fedora-logo-icon 00:02:59.031 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:59.031 ++ HOME_URL=https://fedoraproject.org/ 00:02:59.031 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:59.031 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:59.031 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:59.031 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:59.031 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:59.031 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:59.031 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:59.031 ++ SUPPORT_END=2024-11-12 00:02:59.031 ++ VARIANT='Cloud Edition' 00:02:59.031 ++ VARIANT_ID=cloud 00:02:59.031 + uname -a 00:02:59.031 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:59.031 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:59.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:59.290 Hugepages 00:02:59.290 node hugesize free / total 00:02:59.290 node0 1048576kB 0 / 0 00:02:59.290 node0 2048kB 0 / 0 00:02:59.290 00:02:59.290 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:59.290 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:59.549 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:59.549 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:59.549 + rm -f /tmp/spdk-ld-path 00:02:59.549 + source autorun-spdk.conf 00:02:59.549 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:59.549 ++ SPDK_TEST_NVMF=1 00:02:59.549 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:59.549 ++ SPDK_TEST_URING=1 00:02:59.549 ++ SPDK_TEST_VFIOUSER=1 00:02:59.549 ++ SPDK_TEST_USDT=1 00:02:59.549 ++ SPDK_RUN_UBSAN=1 00:02:59.549 ++ NET_TYPE=virt 00:02:59.549 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:59.549 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:59.549 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:59.549 ++ RUN_NIGHTLY=1 00:02:59.549 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:59.549 + [[ -n '' ]] 00:02:59.549 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:59.549 + for M in /var/spdk/build-*-manifest.txt 00:02:59.549 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:59.549 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.549 + for M in /var/spdk/build-*-manifest.txt 00:02:59.549 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:59.549 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.549 + for M in /var/spdk/build-*-manifest.txt 00:02:59.549 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:59.549 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.549 ++ uname 00:02:59.549 + [[ Linux == \L\i\n\u\x ]] 00:02:59.549 + sudo dmesg -T 00:02:59.549 + sudo dmesg --clear 00:02:59.549 + dmesg_pid=5997 
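Note on the `setup.sh status` output above: the "node0 2048kB 0 / 0" lines report free / reserved hugepages per NUMA node. A minimal standalone sketch of the same check, reading the stock kernel sysfs counters (this is an illustration only; setup.sh's own internals are not visible in this log):
#!/bin/bash
# Hypothetical re-implementation of the hugepage summary printed above,
# walking the standard kernel sysfs layout for each NUMA node.
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        size=${hp##*hugepages-}              # e.g. 2048kB or 1048576kB
        free=$(<"$hp/free_hugepages")        # pages currently free
        total=$(<"$hp/nr_hugepages")         # pages reserved on this node
        echo "$(basename "$node") $size $free / $total"
    done
done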
00:02:59.549 + [[ Fedora Linux == FreeBSD ]] 00:02:59.549 + sudo dmesg -Tw 00:02:59.549 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:59.549 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:59.549 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:59.549 + [[ -x /usr/src/fio-static/fio ]] 00:02:59.549 + export FIO_BIN=/usr/src/fio-static/fio 00:02:59.549 + FIO_BIN=/usr/src/fio-static/fio 00:02:59.549 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:59.549 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:59.549 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:59.549 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:59.549 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:59.549 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:59.549 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:59.549 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:59.549 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:59.549 Test configuration: 00:02:59.549 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:59.549 SPDK_TEST_NVMF=1 00:02:59.549 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:59.549 SPDK_TEST_URING=1 00:02:59.549 SPDK_TEST_VFIOUSER=1 00:02:59.549 SPDK_TEST_USDT=1 00:02:59.549 SPDK_RUN_UBSAN=1 00:02:59.549 NET_TYPE=virt 00:02:59.549 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:59.549 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:59.549 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:59.549 RUN_NIGHTLY=1 02:07:01 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:59.549 02:07:01 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:59.549 02:07:01 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:59.549 02:07:01 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:59.549 02:07:01 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:59.549 02:07:01 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:59.549 02:07:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.549 02:07:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.549 02:07:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.549 02:07:01 -- paths/export.sh@5 -- $ export PATH 00:02:59.549 02:07:01 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.549 02:07:01 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:59.549 02:07:01 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:59.549 02:07:01 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731031621.XXXXXX 00:02:59.808 02:07:01 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731031621.Nrej0i 00:02:59.808 02:07:01 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:59.808 02:07:01 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:59.808 02:07:01 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:59.808 02:07:01 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:59.808 02:07:01 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:59.808 02:07:01 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:59.808 02:07:01 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:59.808 02:07:01 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:59.808 02:07:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.809 02:07:01 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:59.809 02:07:01 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:59.809 02:07:01 -- pm/common@17 -- $ local monitor 00:02:59.809 02:07:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.809 02:07:01 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.809 02:07:01 -- pm/common@25 -- $ sleep 1 00:02:59.809 02:07:01 -- pm/common@21 -- $ date +%s 00:02:59.809 02:07:01 -- pm/common@21 -- $ date +%s 00:02:59.809 02:07:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731031621 00:02:59.809 02:07:01 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731031621 00:02:59.809 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731031621_collect-cpu-load.pm.log 00:02:59.809 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731031621_collect-vmstat.pm.log 00:03:00.746 02:07:02 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:00.746 02:07:02 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:00.746 02:07:02 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:00.746 02:07:02 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:00.746 02:07:02 -- spdk/autobuild.sh@16 -- $ date -u 
00:03:00.746 Fri Nov 8 02:07:02 AM UTC 2024 00:03:00.746 02:07:02 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:00.746 v24.09-rc1-9-gb18e1bd62 00:03:00.746 02:07:02 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:00.746 02:07:02 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:00.746 02:07:02 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:00.746 02:07:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:00.746 02:07:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:00.746 02:07:02 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.746 ************************************ 00:03:00.746 START TEST ubsan 00:03:00.746 ************************************ 00:03:00.746 using ubsan 00:03:00.746 02:07:02 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:00.746 00:03:00.746 real 0m0.000s 00:03:00.746 user 0m0.000s 00:03:00.746 sys 0m0.000s 00:03:00.746 02:07:02 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:00.746 02:07:02 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:00.746 ************************************ 00:03:00.746 END TEST ubsan 00:03:00.746 ************************************ 00:03:00.746 02:07:02 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:03:00.746 02:07:02 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:00.746 02:07:02 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:00.746 02:07:02 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:03:00.746 02:07:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:00.746 02:07:02 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.746 ************************************ 00:03:00.746 START TEST build_native_dpdk 00:03:00.746 ************************************ 00:03:00.746 02:07:02 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:00.746 02:07:02 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:00.746 eeb0605f11 version: 23.11.0 00:03:00.746 238778122a doc: update release notes for 23.11 00:03:00.746 46aa6b3cfc doc: fix description of RSS features 00:03:00.746 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:00.746 7e421ae345 devtools: support skipping forbid rule check 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:00.746 02:07:02 
build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:00.746 patching file config/rte_config.h 00:03:00.746 Hunk #1 succeeded at 60 (offset 1 line). 00:03:00.746 02:07:02 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:00.746 02:07:02 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:00.747 02:07:02 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:00.747 patching file lib/pcapng/rte_pcapng.c 00:03:00.747 02:07:02 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:00.747 02:07:02 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:00.747 02:07:02 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:00.747 02:07:02 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:00.747 02:07:02 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:03:00.747 02:07:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:01.005 02:07:02 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:06.275 The Meson build system 00:03:06.275 Version: 1.5.0 00:03:06.275 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:06.275 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:06.275 Build type: native build 00:03:06.275 Program cat found: YES (/usr/bin/cat) 00:03:06.275 Project name: DPDK 00:03:06.275 Project version: 23.11.0 00:03:06.275 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:06.275 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:06.275 Host machine cpu family: x86_64 00:03:06.275 Host machine cpu: x86_64 00:03:06.275 Message: ## Building in Developer Mode ## 00:03:06.275 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:06.275 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:06.275 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:06.275 Program python3 found: YES (/usr/bin/python3) 00:03:06.275 Program cat found: YES (/usr/bin/cat) 00:03:06.275 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
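For readability: the xtrace above is the cmp_versions helper from scripts/common.sh splitting each dotted version on ".-:" and comparing it field by field; that is how the build decides DPDK 23.11.0 is not older than 21.11.0 but is older than 24.07.0, so the rte_config.h and rte_pcapng.c patches are applied. A standalone sketch of the same comparison, written here for illustration and not copied from the SPDK tree:
# Illustrative dotted-version "less than" check (not the SPDK helper itself):
version_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i x y
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        x=${v1[i]:-0}; y=${v2[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal versions are not strictly less
}
version_lt 23.11.0 21.11.0 || echo "DPDK 23.11 is new enough"   # prints
version_lt 23.11.0 24.07.0 && echo "apply the pre-24.07 patches" # prints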
00:03:06.275 Compiler for C supports arguments -march=native: YES 00:03:06.275 Checking for size of "void *" : 8 00:03:06.275 Checking for size of "void *" : 8 (cached) 00:03:06.275 Library m found: YES 00:03:06.275 Library numa found: YES 00:03:06.275 Has header "numaif.h" : YES 00:03:06.275 Library fdt found: NO 00:03:06.275 Library execinfo found: NO 00:03:06.275 Has header "execinfo.h" : YES 00:03:06.275 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:06.275 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:06.275 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:06.275 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:06.275 Run-time dependency openssl found: YES 3.1.1 00:03:06.275 Run-time dependency libpcap found: YES 1.10.4 00:03:06.275 Has header "pcap.h" with dependency libpcap: YES 00:03:06.275 Compiler for C supports arguments -Wcast-qual: YES 00:03:06.275 Compiler for C supports arguments -Wdeprecated: YES 00:03:06.275 Compiler for C supports arguments -Wformat: YES 00:03:06.275 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:06.275 Compiler for C supports arguments -Wformat-security: NO 00:03:06.275 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:06.275 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:06.275 Compiler for C supports arguments -Wnested-externs: YES 00:03:06.275 Compiler for C supports arguments -Wold-style-definition: YES 00:03:06.275 Compiler for C supports arguments -Wpointer-arith: YES 00:03:06.275 Compiler for C supports arguments -Wsign-compare: YES 00:03:06.275 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:06.275 Compiler for C supports arguments -Wundef: YES 00:03:06.275 Compiler for C supports arguments -Wwrite-strings: YES 00:03:06.275 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:06.275 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:06.275 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:06.275 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:06.275 Program objdump found: YES (/usr/bin/objdump) 00:03:06.275 Compiler for C supports arguments -mavx512f: YES 00:03:06.275 Checking if "AVX512 checking" compiles: YES 00:03:06.275 Fetching value of define "__SSE4_2__" : 1 00:03:06.275 Fetching value of define "__AES__" : 1 00:03:06.275 Fetching value of define "__AVX__" : 1 00:03:06.275 Fetching value of define "__AVX2__" : 1 00:03:06.275 Fetching value of define "__AVX512BW__" : (undefined) 00:03:06.275 Fetching value of define "__AVX512CD__" : (undefined) 00:03:06.275 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:06.275 Fetching value of define "__AVX512F__" : (undefined) 00:03:06.275 Fetching value of define "__AVX512VL__" : (undefined) 00:03:06.275 Fetching value of define "__PCLMUL__" : 1 00:03:06.275 Fetching value of define "__RDRND__" : 1 00:03:06.275 Fetching value of define "__RDSEED__" : 1 00:03:06.275 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:06.275 Fetching value of define "__znver1__" : (undefined) 00:03:06.275 Fetching value of define "__znver2__" : (undefined) 00:03:06.275 Fetching value of define "__znver3__" : (undefined) 00:03:06.275 Fetching value of define "__znver4__" : (undefined) 00:03:06.275 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:06.275 Message: lib/log: Defining dependency "log" 00:03:06.275 Message: lib/kvargs: Defining dependency "kvargs" 00:03:06.275 
Message: lib/telemetry: Defining dependency "telemetry" 00:03:06.275 Checking for function "getentropy" : NO 00:03:06.275 Message: lib/eal: Defining dependency "eal" 00:03:06.275 Message: lib/ring: Defining dependency "ring" 00:03:06.275 Message: lib/rcu: Defining dependency "rcu" 00:03:06.275 Message: lib/mempool: Defining dependency "mempool" 00:03:06.275 Message: lib/mbuf: Defining dependency "mbuf" 00:03:06.275 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:06.275 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:06.275 Compiler for C supports arguments -mpclmul: YES 00:03:06.275 Compiler for C supports arguments -maes: YES 00:03:06.275 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:06.275 Compiler for C supports arguments -mavx512bw: YES 00:03:06.275 Compiler for C supports arguments -mavx512dq: YES 00:03:06.275 Compiler for C supports arguments -mavx512vl: YES 00:03:06.275 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:06.275 Compiler for C supports arguments -mavx2: YES 00:03:06.275 Compiler for C supports arguments -mavx: YES 00:03:06.275 Message: lib/net: Defining dependency "net" 00:03:06.275 Message: lib/meter: Defining dependency "meter" 00:03:06.275 Message: lib/ethdev: Defining dependency "ethdev" 00:03:06.275 Message: lib/pci: Defining dependency "pci" 00:03:06.275 Message: lib/cmdline: Defining dependency "cmdline" 00:03:06.275 Message: lib/metrics: Defining dependency "metrics" 00:03:06.275 Message: lib/hash: Defining dependency "hash" 00:03:06.275 Message: lib/timer: Defining dependency "timer" 00:03:06.275 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:06.275 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:03:06.275 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:03:06.275 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:03:06.275 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:03:06.275 Message: lib/acl: Defining dependency "acl" 00:03:06.275 Message: lib/bbdev: Defining dependency "bbdev" 00:03:06.275 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:06.275 Run-time dependency libelf found: YES 0.191 00:03:06.275 Message: lib/bpf: Defining dependency "bpf" 00:03:06.275 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:06.275 Message: lib/compressdev: Defining dependency "compressdev" 00:03:06.275 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:06.275 Message: lib/distributor: Defining dependency "distributor" 00:03:06.275 Message: lib/dmadev: Defining dependency "dmadev" 00:03:06.275 Message: lib/efd: Defining dependency "efd" 00:03:06.275 Message: lib/eventdev: Defining dependency "eventdev" 00:03:06.275 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:06.275 Message: lib/gpudev: Defining dependency "gpudev" 00:03:06.275 Message: lib/gro: Defining dependency "gro" 00:03:06.275 Message: lib/gso: Defining dependency "gso" 00:03:06.275 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:06.275 Message: lib/jobstats: Defining dependency "jobstats" 00:03:06.275 Message: lib/latencystats: Defining dependency "latencystats" 00:03:06.275 Message: lib/lpm: Defining dependency "lpm" 00:03:06.275 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:06.275 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:06.275 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:06.275 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:03:06.275 Message: lib/member: Defining dependency "member" 00:03:06.275 Message: lib/pcapng: Defining dependency "pcapng" 00:03:06.275 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:06.275 Message: lib/power: Defining dependency "power" 00:03:06.275 Message: lib/rawdev: Defining dependency "rawdev" 00:03:06.275 Message: lib/regexdev: Defining dependency "regexdev" 00:03:06.275 Message: lib/mldev: Defining dependency "mldev" 00:03:06.275 Message: lib/rib: Defining dependency "rib" 00:03:06.275 Message: lib/reorder: Defining dependency "reorder" 00:03:06.275 Message: lib/sched: Defining dependency "sched" 00:03:06.275 Message: lib/security: Defining dependency "security" 00:03:06.275 Message: lib/stack: Defining dependency "stack" 00:03:06.275 Has header "linux/userfaultfd.h" : YES 00:03:06.275 Has header "linux/vduse.h" : YES 00:03:06.275 Message: lib/vhost: Defining dependency "vhost" 00:03:06.275 Message: lib/ipsec: Defining dependency "ipsec" 00:03:06.275 Message: lib/pdcp: Defining dependency "pdcp" 00:03:06.275 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:06.275 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:06.275 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:03:06.275 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:06.275 Message: lib/fib: Defining dependency "fib" 00:03:06.275 Message: lib/port: Defining dependency "port" 00:03:06.275 Message: lib/pdump: Defining dependency "pdump" 00:03:06.275 Message: lib/table: Defining dependency "table" 00:03:06.275 Message: lib/pipeline: Defining dependency "pipeline" 00:03:06.275 Message: lib/graph: Defining dependency "graph" 00:03:06.275 Message: lib/node: Defining dependency "node" 00:03:06.275 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:08.178 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:08.178 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:08.178 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:08.178 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:08.178 Compiler for C supports arguments -Wno-unused-value: YES 00:03:08.178 Compiler for C supports arguments -Wno-format: YES 00:03:08.178 Compiler for C supports arguments -Wno-format-security: YES 00:03:08.178 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:08.178 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:08.178 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:08.178 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:08.178 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:08.178 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:08.178 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:08.178 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:08.178 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:08.179 Has header "sys/epoll.h" : YES 00:03:08.179 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:08.179 Configuring doxy-api-html.conf using configuration 00:03:08.179 Configuring doxy-api-man.conf using configuration 00:03:08.179 Program mandb found: YES (/usr/bin/mandb) 00:03:08.179 Program sphinx-build found: NO 00:03:08.179 Configuring rte_build_config.h using configuration 00:03:08.179 Message: 00:03:08.179 ================= 00:03:08.179 Applications Enabled 00:03:08.179 ================= 
00:03:08.179 00:03:08.179 apps: 00:03:08.179 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:03:08.179 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:08.179 test-pmd, test-regex, test-sad, test-security-perf, 00:03:08.179 00:03:08.179 Message: 00:03:08.179 ================= 00:03:08.179 Libraries Enabled 00:03:08.179 ================= 00:03:08.179 00:03:08.179 libs: 00:03:08.179 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:08.179 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:03:08.179 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:03:08.179 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:03:08.179 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:03:08.179 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:03:08.179 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:03:08.179 00:03:08.179 00:03:08.179 Message: 00:03:08.179 =============== 00:03:08.179 Drivers Enabled 00:03:08.179 =============== 00:03:08.179 00:03:08.179 common: 00:03:08.179 00:03:08.179 bus: 00:03:08.179 pci, vdev, 00:03:08.179 mempool: 00:03:08.179 ring, 00:03:08.179 dma: 00:03:08.179 00:03:08.179 net: 00:03:08.179 i40e, 00:03:08.179 raw: 00:03:08.179 00:03:08.179 crypto: 00:03:08.179 00:03:08.179 compress: 00:03:08.179 00:03:08.179 regex: 00:03:08.179 00:03:08.179 ml: 00:03:08.179 00:03:08.179 vdpa: 00:03:08.179 00:03:08.179 event: 00:03:08.179 00:03:08.179 baseband: 00:03:08.179 00:03:08.179 gpu: 00:03:08.179 00:03:08.179 00:03:08.179 Message: 00:03:08.179 ================= 00:03:08.179 Content Skipped 00:03:08.179 ================= 00:03:08.179 00:03:08.179 apps: 00:03:08.179 00:03:08.179 libs: 00:03:08.179 00:03:08.179 drivers: 00:03:08.179 common/cpt: not in enabled drivers build config 00:03:08.179 common/dpaax: not in enabled drivers build config 00:03:08.179 common/iavf: not in enabled drivers build config 00:03:08.179 common/idpf: not in enabled drivers build config 00:03:08.179 common/mvep: not in enabled drivers build config 00:03:08.179 common/octeontx: not in enabled drivers build config 00:03:08.179 bus/auxiliary: not in enabled drivers build config 00:03:08.179 bus/cdx: not in enabled drivers build config 00:03:08.179 bus/dpaa: not in enabled drivers build config 00:03:08.179 bus/fslmc: not in enabled drivers build config 00:03:08.179 bus/ifpga: not in enabled drivers build config 00:03:08.179 bus/platform: not in enabled drivers build config 00:03:08.179 bus/vmbus: not in enabled drivers build config 00:03:08.179 common/cnxk: not in enabled drivers build config 00:03:08.179 common/mlx5: not in enabled drivers build config 00:03:08.179 common/nfp: not in enabled drivers build config 00:03:08.179 common/qat: not in enabled drivers build config 00:03:08.179 common/sfc_efx: not in enabled drivers build config 00:03:08.179 mempool/bucket: not in enabled drivers build config 00:03:08.179 mempool/cnxk: not in enabled drivers build config 00:03:08.179 mempool/dpaa: not in enabled drivers build config 00:03:08.179 mempool/dpaa2: not in enabled drivers build config 00:03:08.179 mempool/octeontx: not in enabled drivers build config 00:03:08.179 mempool/stack: not in enabled drivers build config 00:03:08.179 dma/cnxk: not in enabled drivers build config 00:03:08.179 dma/dpaa: not in enabled drivers build config 00:03:08.179 dma/dpaa2: not in enabled drivers build config 00:03:08.179 
dma/hisilicon: not in enabled drivers build config 00:03:08.179 dma/idxd: not in enabled drivers build config 00:03:08.179 dma/ioat: not in enabled drivers build config 00:03:08.179 dma/skeleton: not in enabled drivers build config 00:03:08.179 net/af_packet: not in enabled drivers build config 00:03:08.179 net/af_xdp: not in enabled drivers build config 00:03:08.179 net/ark: not in enabled drivers build config 00:03:08.179 net/atlantic: not in enabled drivers build config 00:03:08.179 net/avp: not in enabled drivers build config 00:03:08.179 net/axgbe: not in enabled drivers build config 00:03:08.179 net/bnx2x: not in enabled drivers build config 00:03:08.179 net/bnxt: not in enabled drivers build config 00:03:08.179 net/bonding: not in enabled drivers build config 00:03:08.179 net/cnxk: not in enabled drivers build config 00:03:08.179 net/cpfl: not in enabled drivers build config 00:03:08.179 net/cxgbe: not in enabled drivers build config 00:03:08.179 net/dpaa: not in enabled drivers build config 00:03:08.179 net/dpaa2: not in enabled drivers build config 00:03:08.179 net/e1000: not in enabled drivers build config 00:03:08.179 net/ena: not in enabled drivers build config 00:03:08.179 net/enetc: not in enabled drivers build config 00:03:08.179 net/enetfec: not in enabled drivers build config 00:03:08.179 net/enic: not in enabled drivers build config 00:03:08.179 net/failsafe: not in enabled drivers build config 00:03:08.179 net/fm10k: not in enabled drivers build config 00:03:08.179 net/gve: not in enabled drivers build config 00:03:08.179 net/hinic: not in enabled drivers build config 00:03:08.179 net/hns3: not in enabled drivers build config 00:03:08.179 net/iavf: not in enabled drivers build config 00:03:08.179 net/ice: not in enabled drivers build config 00:03:08.179 net/idpf: not in enabled drivers build config 00:03:08.179 net/igc: not in enabled drivers build config 00:03:08.179 net/ionic: not in enabled drivers build config 00:03:08.179 net/ipn3ke: not in enabled drivers build config 00:03:08.179 net/ixgbe: not in enabled drivers build config 00:03:08.179 net/mana: not in enabled drivers build config 00:03:08.179 net/memif: not in enabled drivers build config 00:03:08.179 net/mlx4: not in enabled drivers build config 00:03:08.179 net/mlx5: not in enabled drivers build config 00:03:08.179 net/mvneta: not in enabled drivers build config 00:03:08.179 net/mvpp2: not in enabled drivers build config 00:03:08.179 net/netvsc: not in enabled drivers build config 00:03:08.179 net/nfb: not in enabled drivers build config 00:03:08.179 net/nfp: not in enabled drivers build config 00:03:08.179 net/ngbe: not in enabled drivers build config 00:03:08.179 net/null: not in enabled drivers build config 00:03:08.179 net/octeontx: not in enabled drivers build config 00:03:08.179 net/octeon_ep: not in enabled drivers build config 00:03:08.179 net/pcap: not in enabled drivers build config 00:03:08.179 net/pfe: not in enabled drivers build config 00:03:08.179 net/qede: not in enabled drivers build config 00:03:08.179 net/ring: not in enabled drivers build config 00:03:08.179 net/sfc: not in enabled drivers build config 00:03:08.179 net/softnic: not in enabled drivers build config 00:03:08.179 net/tap: not in enabled drivers build config 00:03:08.179 net/thunderx: not in enabled drivers build config 00:03:08.179 net/txgbe: not in enabled drivers build config 00:03:08.179 net/vdev_netvsc: not in enabled drivers build config 00:03:08.179 net/vhost: not in enabled drivers build config 00:03:08.179 net/virtio: 
not in enabled drivers build config 00:03:08.179 net/vmxnet3: not in enabled drivers build config 00:03:08.179 raw/cnxk_bphy: not in enabled drivers build config 00:03:08.179 raw/cnxk_gpio: not in enabled drivers build config 00:03:08.179 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:08.179 raw/ifpga: not in enabled drivers build config 00:03:08.179 raw/ntb: not in enabled drivers build config 00:03:08.179 raw/skeleton: not in enabled drivers build config 00:03:08.179 crypto/armv8: not in enabled drivers build config 00:03:08.179 crypto/bcmfs: not in enabled drivers build config 00:03:08.179 crypto/caam_jr: not in enabled drivers build config 00:03:08.179 crypto/ccp: not in enabled drivers build config 00:03:08.179 crypto/cnxk: not in enabled drivers build config 00:03:08.179 crypto/dpaa_sec: not in enabled drivers build config 00:03:08.179 crypto/dpaa2_sec: not in enabled drivers build config 00:03:08.179 crypto/ipsec_mb: not in enabled drivers build config 00:03:08.179 crypto/mlx5: not in enabled drivers build config 00:03:08.179 crypto/mvsam: not in enabled drivers build config 00:03:08.179 crypto/nitrox: not in enabled drivers build config 00:03:08.179 crypto/null: not in enabled drivers build config 00:03:08.179 crypto/octeontx: not in enabled drivers build config 00:03:08.179 crypto/openssl: not in enabled drivers build config 00:03:08.179 crypto/scheduler: not in enabled drivers build config 00:03:08.179 crypto/uadk: not in enabled drivers build config 00:03:08.179 crypto/virtio: not in enabled drivers build config 00:03:08.179 compress/isal: not in enabled drivers build config 00:03:08.179 compress/mlx5: not in enabled drivers build config 00:03:08.179 compress/octeontx: not in enabled drivers build config 00:03:08.179 compress/zlib: not in enabled drivers build config 00:03:08.179 regex/mlx5: not in enabled drivers build config 00:03:08.179 regex/cn9k: not in enabled drivers build config 00:03:08.179 ml/cnxk: not in enabled drivers build config 00:03:08.179 vdpa/ifc: not in enabled drivers build config 00:03:08.179 vdpa/mlx5: not in enabled drivers build config 00:03:08.179 vdpa/nfp: not in enabled drivers build config 00:03:08.179 vdpa/sfc: not in enabled drivers build config 00:03:08.179 event/cnxk: not in enabled drivers build config 00:03:08.179 event/dlb2: not in enabled drivers build config 00:03:08.179 event/dpaa: not in enabled drivers build config 00:03:08.180 event/dpaa2: not in enabled drivers build config 00:03:08.180 event/dsw: not in enabled drivers build config 00:03:08.180 event/opdl: not in enabled drivers build config 00:03:08.180 event/skeleton: not in enabled drivers build config 00:03:08.180 event/sw: not in enabled drivers build config 00:03:08.180 event/octeontx: not in enabled drivers build config 00:03:08.180 baseband/acc: not in enabled drivers build config 00:03:08.180 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:08.180 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:08.180 baseband/la12xx: not in enabled drivers build config 00:03:08.180 baseband/null: not in enabled drivers build config 00:03:08.180 baseband/turbo_sw: not in enabled drivers build config 00:03:08.180 gpu/cuda: not in enabled drivers build config 00:03:08.180 00:03:08.180 00:03:08.180 Build targets in project: 220 00:03:08.180 00:03:08.180 DPDK 23.11.0 00:03:08.180 00:03:08.180 User defined options 00:03:08.180 libdir : lib 00:03:08.180 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:08.180 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:03:08.180 c_link_args : 00:03:08.180 enable_docs : false 00:03:08.180 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:08.180 enable_kmods : false 00:03:08.180 machine : native 00:03:08.180 tests : false 00:03:08.180 00:03:08.180 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:08.180 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:08.438 02:07:10 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:08.438 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:08.438 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:08.438 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:08.438 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:08.695 [4/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:08.695 [5/710] Linking static target lib/librte_kvargs.a 00:03:08.695 [6/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:08.695 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:08.695 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:08.695 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:08.695 [10/710] Linking static target lib/librte_log.a 00:03:08.952 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.952 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:09.211 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:09.211 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:09.211 [15/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.211 [16/710] Linking target lib/librte_log.so.24.0 00:03:09.211 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:09.211 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:09.469 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:09.469 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:09.469 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:09.727 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:09.727 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:09.727 [24/710] Linking target lib/librte_kvargs.so.24.0 00:03:09.727 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:09.727 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:09.727 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:09.985 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:09.985 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:09.985 [30/710] Linking static target lib/librte_telemetry.a 00:03:09.985 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:10.243 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:10.243 [33/710] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:10.243 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.243 [35/710] Linking target lib/librte_telemetry.so.24.0 00:03:10.243 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:10.500 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:10.500 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:10.500 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:10.500 [40/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:10.500 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:10.500 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:10.500 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:10.500 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:10.500 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:10.759 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:10.759 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:11.017 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:11.017 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:11.017 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:11.274 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:11.274 [52/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:11.274 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:11.274 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:11.274 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:11.532 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:11.532 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:11.532 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:11.532 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:11.532 [60/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:11.532 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:11.532 [62/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:11.790 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:11.790 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:11.790 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:11.790 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:11.790 [67/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:11.790 [68/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:12.356 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:12.356 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:12.356 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:12.356 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 
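The "User defined options" summary above records how this DPDK tree was configured (prefix, libdir, c_args, the enabled driver list, machine, tests), and meson itself warns that the setup command was invoked as `meson [options]` rather than `meson setup [options]`. As a rough reconstruction only (the exact wrapper command run by autobuild_common.sh is not captured in this log), an equivalent explicit invocation would look something like the sketch below; the option names and values are copied from that summary, everything else is an assumption.
# Hedged sketch reconstructed from the "User defined options" block above,
# not the literal command used by the autobuild wrapper.
meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false
# Build step, matching the ninja command logged above:
ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10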
00:03:12.356 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:12.356 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:12.356 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:12.356 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:12.356 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:12.356 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:12.614 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:12.872 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:12.872 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:12.872 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:12.872 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:13.129 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:13.129 [85/710] Linking static target lib/librte_ring.a 00:03:13.129 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:13.129 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:13.129 [88/710] Linking static target lib/librte_eal.a 00:03:13.387 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:13.387 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.387 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:13.387 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:13.645 [93/710] Linking static target lib/librte_mempool.a 00:03:13.645 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:13.645 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:13.645 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:13.645 [97/710] Linking static target lib/librte_rcu.a 00:03:13.904 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:13.904 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:13.904 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:14.162 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.162 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:14.162 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.162 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:14.162 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:14.420 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:14.420 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:14.420 [108/710] Linking static target lib/librte_mbuf.a 00:03:14.420 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:14.678 [110/710] Linking static target lib/librte_net.a 00:03:14.678 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:14.678 [112/710] Linking static target lib/librte_meter.a 00:03:14.678 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.936 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:14.936 [115/710] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.936 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:14.936 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:14.936 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:14.936 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.502 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:15.761 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:15.761 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:16.019 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:16.019 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:16.019 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:16.019 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:16.019 [127/710] Linking static target lib/librte_pci.a 00:03:16.019 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:16.277 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:16.277 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.277 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:16.277 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:16.277 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:16.277 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:16.536 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:16.536 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:16.536 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:16.536 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:16.536 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:16.536 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:16.806 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:16.806 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:16.806 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:16.806 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:16.806 [145/710] Linking static target lib/librte_cmdline.a 00:03:17.077 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:17.077 [147/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:17.077 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:17.077 [149/710] Linking static target lib/librte_metrics.a 00:03:17.335 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:17.593 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.853 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.853 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:17.853 [154/710] Linking static target 
lib/librte_timer.a 00:03:17.853 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:18.112 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.370 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:18.628 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:18.628 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:18.628 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:19.194 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:19.453 [162/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:19.453 [163/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:19.453 [164/710] Linking static target lib/librte_bitratestats.a 00:03:19.453 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:19.453 [166/710] Linking static target lib/librte_ethdev.a 00:03:19.453 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:19.453 [168/710] Linking static target lib/librte_bbdev.a 00:03:19.712 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.712 [170/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.712 [171/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:19.712 [172/710] Linking static target lib/librte_hash.a 00:03:19.712 [173/710] Linking target lib/librte_eal.so.24.0 00:03:19.971 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:19.971 [175/710] Linking target lib/librte_ring.so.24.0 00:03:19.971 [176/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:19.971 [177/710] Linking target lib/librte_meter.so.24.0 00:03:19.971 [178/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:19.971 [179/710] Linking target lib/librte_rcu.so.24.0 00:03:20.228 [180/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:20.228 [181/710] Linking target lib/librte_mempool.so.24.0 00:03:20.228 [182/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.228 [183/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:20.228 [184/710] Linking target lib/librte_pci.so.24.0 00:03:20.228 [185/710] Linking target lib/librte_timer.so.24.0 00:03:20.228 [186/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:20.228 [187/710] Linking static target lib/acl/libavx2_tmp.a 00:03:20.228 [188/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:20.228 [189/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:20.228 [190/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.229 [191/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:20.229 [192/710] Linking target lib/librte_mbuf.so.24.0 00:03:20.229 [193/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:20.486 [194/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:20.486 [195/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:20.486 [196/710] Linking static target lib/acl/libavx512_tmp.a 00:03:20.486 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:20.486 [198/710] Linking target lib/librte_net.so.24.0 00:03:20.743 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:20.743 [200/710] Linking static target lib/librte_acl.a 00:03:20.743 [201/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:20.743 [202/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:20.743 [203/710] Linking target lib/librte_cmdline.so.24.0 00:03:20.743 [204/710] Linking target lib/librte_hash.so.24.0 00:03:20.743 [205/710] Linking target lib/librte_bbdev.so.24.0 00:03:20.743 [206/710] Linking static target lib/librte_cfgfile.a 00:03:20.743 [207/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:21.001 [208/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:21.001 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.001 [210/710] Linking target lib/librte_acl.so.24.0 00:03:21.001 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:21.001 [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.259 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:21.259 [214/710] Linking target lib/librte_cfgfile.so.24.0 00:03:21.259 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:21.259 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:21.517 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:21.517 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:21.775 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:21.775 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:21.775 [221/710] Linking static target lib/librte_bpf.a 00:03:21.775 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:21.775 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:21.775 [224/710] Linking static target lib/librte_compressdev.a 00:03:22.033 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:22.033 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.291 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:22.291 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:22.291 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:22.291 [230/710] Linking static target lib/librte_distributor.a 00:03:22.291 [231/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.549 [232/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:22.549 [233/710] Linking target lib/librte_compressdev.so.24.0 00:03:22.549 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.549 [235/710] Linking target lib/librte_distributor.so.24.0 00:03:22.808 [236/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:22.808 [237/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:22.808 [238/710] Linking static 
target lib/librte_dmadev.a 00:03:23.066 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.066 [240/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:23.066 [241/710] Linking target lib/librte_dmadev.so.24.0 00:03:23.325 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:23.325 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:23.583 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:23.583 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:23.583 [246/710] Linking static target lib/librte_efd.a 00:03:23.842 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:23.842 [248/710] Linking static target lib/librte_cryptodev.a 00:03:23.842 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:23.842 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.100 [251/710] Linking target lib/librte_efd.so.24.0 00:03:24.358 [252/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:24.358 [253/710] Linking static target lib/librte_dispatcher.a 00:03:24.358 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:24.358 [255/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.616 [256/710] Linking target lib/librte_ethdev.so.24.0 00:03:24.616 [257/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:24.616 [258/710] Linking static target lib/librte_gpudev.a 00:03:24.616 [259/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:24.616 [260/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:24.616 [261/710] Linking target lib/librte_metrics.so.24.0 00:03:24.616 [262/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.616 [263/710] Linking target lib/librte_bpf.so.24.0 00:03:24.874 [264/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:24.874 [265/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:24.874 [266/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:24.874 [267/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:24.874 [268/710] Linking target lib/librte_bitratestats.so.24.0 00:03:25.133 [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.133 [270/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:25.133 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:03:25.133 [272/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:25.391 [273/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:25.391 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.391 [275/710] Linking target lib/librte_gpudev.so.24.0 00:03:25.650 [276/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:25.650 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:25.650 [278/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 
00:03:25.650 [279/710] Linking static target lib/librte_eventdev.a 00:03:25.650 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:25.650 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:25.650 [282/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:25.650 [283/710] Linking static target lib/librte_gro.a 00:03:25.906 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:25.906 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:25.906 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:25.906 [287/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.906 [288/710] Linking target lib/librte_gro.so.24.0 00:03:26.164 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:26.164 [290/710] Linking static target lib/librte_gso.a 00:03:26.422 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:26.422 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.422 [293/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:26.422 [294/710] Linking target lib/librte_gso.so.24.0 00:03:26.423 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:26.423 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:26.680 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:26.680 [298/710] Linking static target lib/librte_jobstats.a 00:03:26.680 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:26.680 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:26.938 [301/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:26.938 [302/710] Linking static target lib/librte_ip_frag.a 00:03:26.938 [303/710] Linking static target lib/librte_latencystats.a 00:03:26.938 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.938 [305/710] Linking target lib/librte_jobstats.so.24.0 00:03:26.938 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.196 [307/710] Linking target lib/librte_latencystats.so.24.0 00:03:27.196 [308/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.196 [309/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:27.196 [310/710] Linking target lib/librte_ip_frag.so.24.0 00:03:27.196 [311/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:27.196 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:27.196 [313/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:27.196 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:27.196 [315/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:27.455 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:27.455 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:27.713 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.713 [319/710] Linking target lib/librte_eventdev.so.24.0 00:03:27.713 [320/710] 
Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:27.973 [321/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:27.973 [322/710] Linking static target lib/librte_lpm.a 00:03:27.973 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:27.973 [324/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:27.973 [325/710] Linking target lib/librte_dispatcher.so.24.0 00:03:27.973 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:27.973 [327/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:27.973 [328/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:28.232 [329/710] Linking static target lib/librte_pcapng.a 00:03:28.232 [330/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:28.232 [331/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.232 [332/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:28.232 [333/710] Linking target lib/librte_lpm.so.24.0 00:03:28.232 [334/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.489 [335/710] Linking target lib/librte_pcapng.so.24.0 00:03:28.489 [336/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:28.489 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:28.489 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:28.489 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:28.748 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:28.748 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:28.748 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:28.749 [343/710] Linking static target lib/librte_power.a 00:03:29.008 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:29.008 [345/710] Linking static target lib/librte_regexdev.a 00:03:29.008 [346/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:29.008 [347/710] Linking static target lib/librte_rawdev.a 00:03:29.008 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:29.008 [349/710] Linking static target lib/librte_member.a 00:03:29.008 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:29.274 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:29.274 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:29.274 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.274 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:29.532 [355/710] Linking static target lib/librte_mldev.a 00:03:29.532 [356/710] Linking target lib/librte_member.so.24.0 00:03:29.532 [357/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.532 [358/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.532 [359/710] Linking target lib/librte_rawdev.so.24.0 00:03:29.532 [360/710] Linking target lib/librte_power.so.24.0 00:03:29.532 [361/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 
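In this stretch each "Generating lib/<name>.sym_chk with a custom command" step appears to be DPDK's per-library check of the exported symbols against the library's version map, and each "Linking target lib/librte_<name>.so.24.0" step produces the corresponding shared object inside the build tree. Purely as an illustrative follow-up (not something this logged run performs), the symbols exported by one of the libraries linked here, librte_lpm, could be listed like this, assuming the same build directory as the ninja command above:
# Illustrative check only, not taken from this log: list the dynamic symbols
# the freshly linked LPM library exports.
nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_lpm.so.24.0 | grep ' rte_lpm_' | head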
00:03:29.532 [362/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:29.805 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.805 [364/710] Linking target lib/librte_regexdev.so.24.0 00:03:29.805 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:30.114 [366/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:30.114 [367/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:30.114 [368/710] Linking static target lib/librte_reorder.a 00:03:30.114 [369/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:30.114 [370/710] Linking static target lib/librte_rib.a 00:03:30.114 [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:30.114 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:30.114 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:30.386 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:30.386 [375/710] Linking static target lib/librte_stack.a 00:03:30.386 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.386 [377/710] Linking target lib/librte_reorder.so.24.0 00:03:30.386 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:30.386 [379/710] Linking static target lib/librte_security.a 00:03:30.645 [380/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.645 [381/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:30.645 [382/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.645 [383/710] Linking target lib/librte_stack.so.24.0 00:03:30.645 [384/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.645 [385/710] Linking target lib/librte_rib.so.24.0 00:03:30.645 [386/710] Linking target lib/librte_mldev.so.24.0 00:03:30.645 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:30.903 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.903 [389/710] Linking target lib/librte_security.so.24.0 00:03:30.903 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:30.903 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:30.903 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:31.161 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:31.161 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:31.161 [395/710] Linking static target lib/librte_sched.a 00:03:31.729 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:31.729 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.729 [398/710] Linking target lib/librte_sched.so.24.0 00:03:31.729 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:31.729 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:31.729 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:31.987 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:32.246 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:32.246 [404/710] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:32.504 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:32.504 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:32.763 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:32.763 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:32.763 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:32.763 [410/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:33.021 [411/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:33.021 [412/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:33.021 [413/710] Linking static target lib/librte_ipsec.a 00:03:33.280 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:33.280 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:33.280 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.538 [417/710] Linking target lib/librte_ipsec.so.24.0 00:03:33.538 [418/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:33.538 [419/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:33.538 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:33.538 [421/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:33.538 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:33.538 [423/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:34.473 [424/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:34.473 [425/710] Linking static target lib/librte_pdcp.a 00:03:34.473 [426/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:34.473 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:34.473 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:34.473 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:34.473 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:34.473 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:34.732 [432/710] Linking static target lib/librte_fib.a 00:03:34.732 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.732 [434/710] Linking target lib/librte_pdcp.so.24.0 00:03:34.990 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.990 [436/710] Linking target lib/librte_fib.so.24.0 00:03:34.990 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:35.557 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:35.557 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:35.557 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:35.815 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:35.815 [442/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:35.815 [443/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:36.074 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:36.074 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:36.332 [446/710] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:36.332 [447/710] Linking static target lib/librte_port.a 00:03:36.332 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:36.591 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:36.591 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:36.591 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:36.849 [452/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:36.849 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:36.849 [454/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.849 [455/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:36.849 [456/710] Linking static target lib/librte_pdump.a 00:03:36.849 [457/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:36.849 [458/710] Linking target lib/librte_port.so.24.0 00:03:37.108 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:37.108 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.108 [461/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:37.108 [462/710] Linking target lib/librte_pdump.so.24.0 00:03:37.675 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:37.675 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:37.675 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:37.675 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:37.675 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:37.675 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:38.242 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:38.242 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:38.242 [471/710] Linking static target lib/librte_table.a 00:03:38.242 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:38.242 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:38.809 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.809 [475/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:38.809 [476/710] Linking target lib/librte_table.so.24.0 00:03:39.069 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:39.069 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:39.069 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:39.327 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:39.586 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:39.586 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:39.844 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:39.844 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:39.844 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:39.844 [486/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 
00:03:40.412 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:40.412 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:40.412 [489/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:40.412 [490/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:40.670 [491/710] Linking static target lib/librte_graph.a 00:03:40.670 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:40.670 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:41.237 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.237 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:41.237 [496/710] Linking target lib/librte_graph.so.24.0 00:03:41.237 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:41.237 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:41.237 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:41.496 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:41.754 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:41.754 [502/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:41.754 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:41.754 [504/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:42.013 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:42.013 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:42.271 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:42.271 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:42.530 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:42.530 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:42.530 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:42.530 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:42.789 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:42.789 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:42.789 [515/710] Linking static target lib/librte_node.a 00:03:43.047 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.047 [517/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:43.047 [518/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:43.047 [519/710] Linking target lib/librte_node.so.24.0 00:03:43.047 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:43.047 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:43.306 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:43.306 [523/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:43.306 [524/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:43.306 [525/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:43.306 [526/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:43.306 [527/710] Linking static target 
drivers/librte_bus_vdev.a 00:03:43.306 [528/710] Linking static target drivers/librte_bus_pci.a 00:03:43.565 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:43.565 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:43.565 [531/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.824 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:43.824 [533/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:43.824 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:43.824 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:43.824 [536/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.824 [537/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:43.824 [538/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:43.824 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:44.082 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:44.082 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:44.082 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:44.082 [543/710] Linking static target drivers/librte_mempool_ring.a 00:03:44.082 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:44.082 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:44.341 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:44.600 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:44.858 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:45.117 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:45.117 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:45.117 [551/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:46.051 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:46.051 [553/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:46.051 [554/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:46.051 [555/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:46.051 [556/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:46.051 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:46.618 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:46.618 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:46.877 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:46.877 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:46.877 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:47.442 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:47.442 [564/710] Compiling C object 
app/dpdk-graph.p/graph_conn.c.o 00:03:47.701 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:47.701 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:47.958 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:48.216 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:48.216 [569/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:48.216 [570/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:48.475 [571/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:48.475 [572/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:48.475 [573/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:48.475 [574/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:48.475 [575/710] Linking static target lib/librte_vhost.a 00:03:48.733 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:48.992 [577/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:48.992 [578/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:48.992 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:48.992 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:49.250 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:49.250 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:49.520 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:49.521 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:49.521 [585/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:49.521 [586/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.521 [587/710] Linking static target drivers/librte_net_i40e.a 00:03:49.521 [588/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:49.521 [589/710] Linking target lib/librte_vhost.so.24.0 00:03:49.521 [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:49.822 [591/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:49.822 [592/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:49.822 [593/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:49.822 [594/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:50.080 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.339 [596/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:50.339 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:50.339 [598/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:50.599 [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:50.857 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:51.116 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:51.116 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:51.116 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:51.116 
[604/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:51.116 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:51.374 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:51.374 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:51.632 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:51.890 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:51.890 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:51.890 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:52.149 [612/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:52.149 [613/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:52.149 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:52.149 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:52.149 [616/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:52.407 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:52.666 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:52.666 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:52.924 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:53.182 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:53.182 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:53.182 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:54.118 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:54.118 [625/710] Linking static target lib/librte_pipeline.a 00:03:54.118 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:54.118 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:54.118 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:54.118 [629/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:54.376 [630/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:54.376 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:54.376 [632/710] Linking target app/dpdk-dumpcap 00:03:54.376 [633/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:54.376 [634/710] Linking target app/dpdk-graph 00:03:54.634 [635/710] Linking target app/dpdk-pdump 00:03:54.634 [636/710] Linking target app/dpdk-proc-info 00:03:54.892 [637/710] Linking target app/dpdk-test-acl 00:03:54.892 [638/710] Linking target app/dpdk-test-cmdline 00:03:54.892 [639/710] Linking target app/dpdk-test-compress-perf 00:03:54.892 [640/710] Linking target app/dpdk-test-crypto-perf 00:03:54.892 [641/710] Linking target app/dpdk-test-dma-perf 00:03:55.150 [642/710] Linking target app/dpdk-test-fib 00:03:55.150 [643/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:55.409 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:55.409 [645/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:55.409 [646/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:55.409 [647/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:55.667 [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:55.667 [649/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:55.667 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:55.926 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:55.926 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:56.184 [653/710] Linking target app/dpdk-test-gpudev 00:03:56.184 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:56.184 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:56.184 [656/710] Linking target app/dpdk-test-eventdev 00:03:56.184 [657/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:56.443 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:56.702 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:56.702 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:56.702 [661/710] Linking target app/dpdk-test-flow-perf 00:03:56.702 [662/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:56.702 [663/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.702 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:56.960 [665/710] Linking target lib/librte_pipeline.so.24.0 00:03:56.960 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:56.961 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:57.219 [668/710] Linking target app/dpdk-test-bbdev 00:03:57.219 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:57.477 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:57.477 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:57.477 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:57.477 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:57.735 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:57.735 [675/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:57.993 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:57.993 [677/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:58.250 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:58.250 [679/710] Linking target app/dpdk-test-mldev 00:03:58.509 [680/710] Linking target app/dpdk-test-pipeline 00:03:58.509 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:58.509 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:58.767 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:59.025 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:59.025 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:59.025 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:59.284 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:59.284 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:59.541 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:59.541 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:59.799 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:59.799 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:59.799 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:04:00.365 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:04:00.623 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:04:00.623 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:04:00.882 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:04:00.882 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:04:00.882 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:04:01.140 [700/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:04:01.140 [701/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:04:01.398 [702/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:04:01.398 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:04:01.398 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:04:01.398 [705/710] Linking target app/dpdk-test-regex 00:04:01.656 [706/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:04:01.656 [707/710] Linking target app/dpdk-test-sad 00:04:01.915 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:04:02.173 [709/710] Linking target app/dpdk-testpmd 00:04:02.432 [710/710] Linking target app/dpdk-test-security-perf 00:04:02.432 02:08:04 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:04:02.432 02:08:04 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:04:02.432 02:08:04 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:04:02.432 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:04:02.432 [0/1] Installing files. 
00:04:03.002 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:03.002 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:03.003 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.003 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:03.004 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:03.004 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.005 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:03.006 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:03.007 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:03.007 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:03.007 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.007 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
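The entries above land the core DPDK runtime libraries (EAL, ring, mempool, mbuf, ethdev, hash, cryptodev, eventdev and related) in /home/vagrant/spdk_repo/dpdk/build/lib, each as both a static archive (.a) and a versioned shared object (.so.24.0). A minimal sketch of a program built against this installed tree might look as follows; the pool name, sizing and EAL arguments are illustrative assumptions, not values taken from this build:

/* Hypothetical consumer of the installed DPDK libraries: initialize the EAL
 * and create a packet-mbuf pool. Names and sizes are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_lcore.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() consumes the EAL portion of argv (e.g. "-l 0-1 --no-pci"). */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* A pool of packet mbufs on the local NUMA socket; sizing is illustrative. */
    struct rte_mempool *mp = rte_pktmbuf_pool_create("mbuf_pool", 8192, 256, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
    printf("lcore %u allocated mbuf %p\n", rte_lcore_id(), (void *)m);
    rte_pktmbuf_free(m);

    rte_eal_cleanup();
    return 0;
}

Assuming the prefix used in this log, such a program would normally pick up its flags from the pkg-config data installed alongside the libraries, e.g. PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig pkg-config --cflags --libs libdpdk (that pkgconfig path is inferred from the prefix above, not shown in this log).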
00:04:03.008 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
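Among the libraries installed just above are librte_lpm and librte_fib, DPDK's longest-prefix-match lookup tables for IPv4/IPv6 routing. A small sketch of librte_lpm usage, assuming EAL arguments, table sizing and route values that are purely illustrative:

/* Hypothetical librte_lpm usage: build a tiny IPv4 route table and look up
 * one address. The table name, sizing and routes are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_lcore.h>
#include <rte_ip.h>
#include <rte_lpm.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    struct rte_lpm_config cfg = {
        .max_rules = 1024,      /* illustrative sizing */
        .number_tbl8s = 256,
        .flags = 0,
    };
    struct rte_lpm *lpm = rte_lpm_create("route_table", rte_socket_id(), &cfg);
    if (lpm == NULL)
        rte_exit(EXIT_FAILURE, "LPM table creation failed\n");

    /* Example route: 10.0.0.0/8 -> next hop 1 (not taken from this build). */
    rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 1);

    uint32_t next_hop = 0;
    if (rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &next_hop) == 0)
        printf("10.1.2.3 -> next hop %u\n", next_hop);

    rte_lpm_free(lpm);
    rte_eal_cleanup();
    return 0;
}

rte_lpm_lookup() returns 0 on a hit and a negative value on a miss, so the print only fires for addresses covered by the 10.0.0.0/8 rule added above.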
00:04:03.008 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.008 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.578 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.578 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.578 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.578 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:03.578 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.578 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:03.578 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.578 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:03.578 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.578 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:03.578 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.578 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.579 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.580 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:03.581 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:03.581 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:04:03.581 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:04:03.581 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:04:03.581 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:04:03.581 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:04:03.581 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:04:03.581 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:04:03.581 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:04:03.581 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:04:03.581 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:04:03.581 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:04:03.581 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:04:03.581 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:04:03.581 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:04:03.581 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:04:03.581 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:04:03.581 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:04:03.581 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:04:03.581 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:04:03.581 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:04:03.581 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:04:03.581 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:04:03.581 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:04:03.581 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:04:03.581 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:04:03.581 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:04:03.581 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:04:03.581 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:04:03.581 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:04:03.581 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:04:03.581 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:04:03.581 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:04:03.581 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:04:03.581 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:04:03.581 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:04:03.581 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:04:03.581 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:04:03.581 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:04:03.581 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:04:03.581 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:04:03.581 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:04:03.581 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:04:03.581 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:04:03.582 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:04:03.582 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:04:03.582 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:04:03.582 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:04:03.582 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:04:03.582 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:04:03.582 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:04:03.582 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:04:03.582 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:04:03.582 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:04:03.582 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:04:03.582 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:04:03.582 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:04:03.582 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:04:03.582 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:04:03.582 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:04:03.582 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:04:03.582 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:04:03.582 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:04:03.582 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:04:03.582 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:04:03.582 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:04:03.582 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:04:03.582 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:04:03.582 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:04:03.582 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:04:03.582 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:04:03.582 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:04:03.582 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:04:03.582 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:04:03.582 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:04:03.582 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:04:03.582 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:04:03.582 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:04:03.582 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:04:03.582 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:04:03.582 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:04:03.582 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:04:03.582 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:04:03.582 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:04:03.582 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:04:03.582 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:04:03.582 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:04:03.582 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:04:03.582 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:04:03.582 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:04:03.582 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:04:03.582 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:04:03.582 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:04:03.582 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:04:03.582 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:04:03.582 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:04:03.582 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:04:03.582 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:04:03.582 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:04:03.582 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:04:03.582 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:04:03.582 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:04:03.582 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:04:03.582 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:04:03.582 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:04:03.582 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:04:03.582 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:04:03.582 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:04:03.582 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:04:03.582 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:04:03.582 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:04:03.582 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:04:03.582 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:04:03.582 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:04:03.582 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:04:03.582 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:04:03.582 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:04:03.582 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:04:03.582 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:04:03.582 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:04:03.582 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:04:03.582 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:04:03.582 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:04:03.582 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:04:03.582 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:04:03.582 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:04:03.582 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:03.582 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:04:03.582 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:03.582 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:04:03.582 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:03.582 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:04:03.582 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:03.582 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:04:03.582 02:08:05 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:04:03.582 02:08:05 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:03.582 00:04:03.582 real 1m2.779s 00:04:03.583 user 7m40.694s 00:04:03.583 sys 1m5.355s 00:04:03.583 02:08:05 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:03.583 02:08:05 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:04:03.583 ************************************ 00:04:03.583 END TEST build_native_dpdk 00:04:03.583 ************************************ 00:04:03.583 02:08:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:03.583 02:08:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:03.583 02:08:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:03.583 02:08:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:03.583 02:08:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:03.583 02:08:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:03.583 02:08:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:03.583 02:08:05 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:04:03.840 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:04:03.840 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:04:03.840 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:04:03.840 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:04.408 Using 'verbs' RDMA provider 00:04:17.549 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:32.492 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:32.492 Creating mk/config.mk...done. 00:04:32.492 Creating mk/cc.flags.mk...done. 00:04:32.492 Type 'make' to build. 00:04:32.492 02:08:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:32.492 02:08:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:32.492 02:08:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:32.492 02:08:32 -- common/autotest_common.sh@10 -- $ set +x 00:04:32.492 ************************************ 00:04:32.492 START TEST make 00:04:32.492 ************************************ 00:04:32.492 02:08:32 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:32.492 make[1]: Nothing to be done for 'all'. 
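Note: the configure step recorded above can be re-run by hand when reproducing this job outside the CI harness. The sketch below is an approximation, not part of the captured output; it assumes the same /home/vagrant/spdk_repo workspace layout and the DPDK tree that was just installed to dpdk/build, and it uses only a subset of the flags shown in the logged configure line.

    # Point SPDK at the freshly installed shared DPDK instead of the bundled submodule
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror \
        --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared \
        --with-uring --with-vfio-user --with-ublk \
        --with-fio=/usr/src/fio --disable-unit-tests
    # configure resolves the DPDK libraries and headers through
    # dpdk/build/lib/pkgconfig (libdpdk.pc installed above), which is why the log
    # prints "Using .../dpdk/build/lib/pkgconfig for additional libs"
    make -j10

The remaining flags in the logged invocation (--with-rdma, --with-usdt, --with-idxd, --with-iscsi-initiator, --enable-ubsan, --enable-coverage) depend on packages present on the CI image and can be added or dropped to match the local machine.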
00:04:32.750 The Meson build system 00:04:32.750 Version: 1.5.0 00:04:32.750 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:04:32.750 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:32.750 Build type: native build 00:04:32.750 Project name: libvfio-user 00:04:32.750 Project version: 0.0.1 00:04:32.750 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:32.750 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:32.750 Host machine cpu family: x86_64 00:04:32.750 Host machine cpu: x86_64 00:04:32.750 Run-time dependency threads found: YES 00:04:32.750 Library dl found: YES 00:04:32.750 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:32.750 Run-time dependency json-c found: YES 0.17 00:04:32.750 Run-time dependency cmocka found: YES 1.1.7 00:04:32.750 Program pytest-3 found: NO 00:04:32.750 Program flake8 found: NO 00:04:32.750 Program misspell-fixer found: NO 00:04:32.750 Program restructuredtext-lint found: NO 00:04:32.750 Program valgrind found: YES (/usr/bin/valgrind) 00:04:32.750 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:32.750 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:32.750 Compiler for C supports arguments -Wwrite-strings: YES 00:04:32.750 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:32.750 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:04:32.750 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:04:32.750 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
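The libvfio-user submodule configured above is built out of tree and then staged under SPDK's build directory. A minimal hand-run equivalent of that configure/build/stage sequence, assuming the source and build directories reported by Meson here and the DESTDIR used by the install step further down in the log (the SPDK makefiles drive this internally, so option spellings are an approximation):

    # Configure the out-of-tree debug build of libvfio-user as a shared library
    meson setup /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug \
        /home/vagrant/spdk_repo/spdk/libvfio-user \
        --buildtype=debug --default-library=shared --libdir=/usr/local/lib
    # Compile with ninja in the build directory
    ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
    # Stage the install under SPDK's build tree instead of the real /usr/local
    DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user \
        meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug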
00:04:32.750 Build targets in project: 8 00:04:32.750 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:32.750 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:32.750 00:04:32.750 libvfio-user 0.0.1 00:04:32.750 00:04:32.750 User defined options 00:04:32.750 buildtype : debug 00:04:32.750 default_library: shared 00:04:32.750 libdir : /usr/local/lib 00:04:32.750 00:04:32.750 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:33.316 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:33.316 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:33.316 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:33.316 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:33.316 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:33.316 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:33.316 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:33.316 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:33.316 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:33.316 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:33.316 [10/37] Compiling C object samples/null.p/null.c.o 00:04:33.316 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:33.573 [12/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:33.573 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:33.573 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:33.573 [15/37] Compiling C object samples/client.p/client.c.o 00:04:33.573 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:33.573 [17/37] Compiling C object samples/server.p/server.c.o 00:04:33.573 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:33.573 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:33.573 [20/37] Linking target samples/client 00:04:33.573 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:33.573 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:33.573 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:33.573 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:33.573 [25/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:33.573 [26/37] Linking target lib/libvfio-user.so.0.0.1 00:04:33.573 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:33.573 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:33.831 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:33.831 [30/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:33.831 [31/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:33.831 [32/37] Linking target samples/server 00:04:33.831 [33/37] Linking target samples/null 00:04:33.831 [34/37] Linking target samples/lspci 00:04:33.831 [35/37] Linking target samples/gpio-pci-idio-16 00:04:33.831 [36/37] Linking target samples/shadow_ioeventfd_server 00:04:33.831 [37/37] Linking target test/unit_tests 00:04:33.831 INFO: autodetecting backend as ninja 00:04:33.831 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:33.831 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:34.398 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:34.398 ninja: no work to do. 00:05:30.617 CC lib/ut_mock/mock.o 00:05:30.617 CC lib/ut/ut.o 00:05:30.617 CC lib/log/log_flags.o 00:05:30.617 CC lib/log/log.o 00:05:30.617 CC lib/log/log_deprecated.o 00:05:30.617 LIB libspdk_ut_mock.a 00:05:30.617 LIB libspdk_ut.a 00:05:30.617 LIB libspdk_log.a 00:05:30.617 SO libspdk_ut_mock.so.6.0 00:05:30.617 SO libspdk_ut.so.2.0 00:05:30.617 SO libspdk_log.so.7.0 00:05:30.617 SYMLINK libspdk_ut_mock.so 00:05:30.617 SYMLINK libspdk_ut.so 00:05:30.617 SYMLINK libspdk_log.so 00:05:30.617 CC lib/util/base64.o 00:05:30.617 CC lib/util/cpuset.o 00:05:30.617 CC lib/util/bit_array.o 00:05:30.617 CC lib/util/crc16.o 00:05:30.617 CC lib/util/crc32c.o 00:05:30.617 CC lib/util/crc32.o 00:05:30.617 CC lib/ioat/ioat.o 00:05:30.617 CC lib/dma/dma.o 00:05:30.617 CXX lib/trace_parser/trace.o 00:05:30.617 CC lib/vfio_user/host/vfio_user_pci.o 00:05:30.617 CC lib/util/crc32_ieee.o 00:05:30.617 CC lib/vfio_user/host/vfio_user.o 00:05:30.617 CC lib/util/crc64.o 00:05:30.617 CC lib/util/dif.o 00:05:30.617 LIB libspdk_dma.a 00:05:30.617 CC lib/util/fd.o 00:05:30.617 CC lib/util/fd_group.o 00:05:30.617 SO libspdk_dma.so.5.0 00:05:30.617 LIB libspdk_ioat.a 00:05:30.617 CC lib/util/file.o 00:05:30.617 SO libspdk_ioat.so.7.0 00:05:30.617 SYMLINK libspdk_dma.so 00:05:30.617 CC lib/util/hexlify.o 00:05:30.617 CC lib/util/iov.o 00:05:30.617 CC lib/util/math.o 00:05:30.617 SYMLINK libspdk_ioat.so 00:05:30.617 LIB libspdk_vfio_user.a 00:05:30.617 CC lib/util/net.o 00:05:30.617 CC lib/util/pipe.o 00:05:30.617 SO libspdk_vfio_user.so.5.0 00:05:30.617 CC lib/util/strerror_tls.o 00:05:30.617 CC lib/util/string.o 00:05:30.617 SYMLINK libspdk_vfio_user.so 00:05:30.617 CC lib/util/uuid.o 00:05:30.617 CC lib/util/xor.o 00:05:30.617 CC lib/util/zipf.o 00:05:30.617 CC lib/util/md5.o 00:05:30.617 LIB libspdk_util.a 00:05:30.617 SO libspdk_util.so.10.0 00:05:30.617 SYMLINK libspdk_util.so 00:05:30.617 LIB libspdk_trace_parser.a 00:05:30.617 SO libspdk_trace_parser.so.6.0 00:05:30.617 SYMLINK libspdk_trace_parser.so 00:05:30.617 CC lib/rdma_utils/rdma_utils.o 00:05:30.617 CC lib/conf/conf.o 00:05:30.617 CC lib/env_dpdk/env.o 00:05:30.617 CC lib/rdma_provider/common.o 00:05:30.617 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:30.617 CC lib/vmd/vmd.o 00:05:30.617 CC lib/env_dpdk/memory.o 00:05:30.617 CC lib/idxd/idxd_user.o 00:05:30.617 CC lib/idxd/idxd.o 00:05:30.617 CC lib/json/json_parse.o 00:05:30.617 LIB libspdk_rdma_provider.a 00:05:30.617 CC lib/json/json_util.o 00:05:30.617 SO libspdk_rdma_provider.so.6.0 00:05:30.617 LIB libspdk_conf.a 00:05:30.617 CC lib/vmd/led.o 00:05:30.617 SO libspdk_conf.so.6.0 00:05:30.617 CC lib/env_dpdk/pci.o 00:05:30.617 LIB libspdk_rdma_utils.a 00:05:30.617 SYMLINK libspdk_rdma_provider.so 00:05:30.617 CC lib/env_dpdk/init.o 00:05:30.617 SYMLINK libspdk_conf.so 00:05:30.617 CC lib/json/json_write.o 00:05:30.617 SO libspdk_rdma_utils.so.1.0 00:05:30.617 SYMLINK libspdk_rdma_utils.so 00:05:30.617 CC lib/env_dpdk/threads.o 00:05:30.617 CC lib/env_dpdk/pci_ioat.o 00:05:30.617 CC lib/idxd/idxd_kernel.o 00:05:30.617 CC lib/env_dpdk/pci_virtio.o 00:05:30.617 CC lib/env_dpdk/pci_vmd.o 00:05:30.617 CC lib/env_dpdk/pci_idxd.o 00:05:30.617 LIB libspdk_json.a 00:05:30.617 LIB libspdk_idxd.a 00:05:30.617 CC 
lib/env_dpdk/pci_event.o 00:05:30.617 LIB libspdk_vmd.a 00:05:30.617 SO libspdk_json.so.6.0 00:05:30.617 SO libspdk_idxd.so.12.1 00:05:30.617 SO libspdk_vmd.so.6.0 00:05:30.617 CC lib/env_dpdk/sigbus_handler.o 00:05:30.617 SYMLINK libspdk_json.so 00:05:30.617 CC lib/env_dpdk/pci_dpdk.o 00:05:30.617 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:30.617 SYMLINK libspdk_idxd.so 00:05:30.617 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:30.617 SYMLINK libspdk_vmd.so 00:05:30.617 CC lib/jsonrpc/jsonrpc_server.o 00:05:30.617 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:30.617 CC lib/jsonrpc/jsonrpc_client.o 00:05:30.617 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:30.617 LIB libspdk_jsonrpc.a 00:05:30.617 SO libspdk_jsonrpc.so.6.0 00:05:30.617 SYMLINK libspdk_jsonrpc.so 00:05:30.617 LIB libspdk_env_dpdk.a 00:05:30.617 SO libspdk_env_dpdk.so.15.0 00:05:30.617 CC lib/rpc/rpc.o 00:05:30.617 SYMLINK libspdk_env_dpdk.so 00:05:30.617 LIB libspdk_rpc.a 00:05:30.617 SO libspdk_rpc.so.6.0 00:05:30.617 SYMLINK libspdk_rpc.so 00:05:30.617 CC lib/keyring/keyring_rpc.o 00:05:30.617 CC lib/keyring/keyring.o 00:05:30.617 CC lib/notify/notify.o 00:05:30.617 CC lib/notify/notify_rpc.o 00:05:30.617 CC lib/trace/trace_flags.o 00:05:30.617 CC lib/trace/trace.o 00:05:30.617 CC lib/trace/trace_rpc.o 00:05:30.617 LIB libspdk_notify.a 00:05:30.617 LIB libspdk_keyring.a 00:05:30.617 SO libspdk_notify.so.6.0 00:05:30.617 LIB libspdk_trace.a 00:05:30.617 SO libspdk_keyring.so.2.0 00:05:30.617 SYMLINK libspdk_notify.so 00:05:30.617 SO libspdk_trace.so.11.0 00:05:30.617 SYMLINK libspdk_keyring.so 00:05:30.617 SYMLINK libspdk_trace.so 00:05:30.617 CC lib/thread/thread.o 00:05:30.617 CC lib/sock/sock.o 00:05:30.617 CC lib/thread/iobuf.o 00:05:30.617 CC lib/sock/sock_rpc.o 00:05:30.617 LIB libspdk_sock.a 00:05:30.617 SO libspdk_sock.so.10.0 00:05:30.617 SYMLINK libspdk_sock.so 00:05:30.876 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:30.876 CC lib/nvme/nvme_ctrlr.o 00:05:30.876 CC lib/nvme/nvme_ns_cmd.o 00:05:30.876 CC lib/nvme/nvme_fabric.o 00:05:30.876 CC lib/nvme/nvme_pcie_common.o 00:05:30.876 CC lib/nvme/nvme_ns.o 00:05:30.876 CC lib/nvme/nvme_qpair.o 00:05:30.876 CC lib/nvme/nvme_pcie.o 00:05:30.876 CC lib/nvme/nvme.o 00:05:31.444 LIB libspdk_thread.a 00:05:31.444 SO libspdk_thread.so.10.1 00:05:31.703 SYMLINK libspdk_thread.so 00:05:31.703 CC lib/nvme/nvme_quirks.o 00:05:31.703 CC lib/nvme/nvme_transport.o 00:05:31.703 CC lib/nvme/nvme_discovery.o 00:05:31.703 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:31.703 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:31.703 CC lib/nvme/nvme_tcp.o 00:05:31.962 CC lib/nvme/nvme_opal.o 00:05:31.962 CC lib/nvme/nvme_io_msg.o 00:05:31.962 CC lib/nvme/nvme_poll_group.o 00:05:32.529 CC lib/nvme/nvme_zns.o 00:05:32.529 CC lib/accel/accel.o 00:05:32.529 CC lib/nvme/nvme_stubs.o 00:05:32.529 CC lib/blob/blobstore.o 00:05:32.529 CC lib/blob/request.o 00:05:32.529 CC lib/blob/zeroes.o 00:05:32.529 CC lib/init/json_config.o 00:05:32.787 CC lib/init/subsystem.o 00:05:32.787 CC lib/init/subsystem_rpc.o 00:05:32.787 CC lib/init/rpc.o 00:05:32.787 CC lib/blob/blob_bs_dev.o 00:05:33.045 CC lib/accel/accel_rpc.o 00:05:33.045 CC lib/accel/accel_sw.o 00:05:33.045 LIB libspdk_init.a 00:05:33.045 CC lib/nvme/nvme_auth.o 00:05:33.045 SO libspdk_init.so.6.0 00:05:33.045 SYMLINK libspdk_init.so 00:05:33.045 CC lib/nvme/nvme_cuse.o 00:05:33.304 CC lib/virtio/virtio.o 00:05:33.304 CC lib/virtio/virtio_vhost_user.o 00:05:33.304 CC lib/nvme/nvme_vfio_user.o 00:05:33.304 CC lib/vfu_tgt/tgt_endpoint.o 00:05:33.304 CC lib/fsdev/fsdev.o 00:05:33.304 CC 
lib/fsdev/fsdev_io.o 00:05:33.599 CC lib/vfu_tgt/tgt_rpc.o 00:05:33.599 CC lib/virtio/virtio_vfio_user.o 00:05:33.599 LIB libspdk_accel.a 00:05:33.599 SO libspdk_accel.so.16.0 00:05:33.599 CC lib/nvme/nvme_rdma.o 00:05:33.599 SYMLINK libspdk_accel.so 00:05:33.858 LIB libspdk_vfu_tgt.a 00:05:33.858 SO libspdk_vfu_tgt.so.3.0 00:05:33.858 CC lib/virtio/virtio_pci.o 00:05:33.858 SYMLINK libspdk_vfu_tgt.so 00:05:33.858 CC lib/fsdev/fsdev_rpc.o 00:05:33.858 CC lib/event/app.o 00:05:33.858 CC lib/event/reactor.o 00:05:33.858 CC lib/event/log_rpc.o 00:05:34.116 CC lib/bdev/bdev.o 00:05:34.116 LIB libspdk_fsdev.a 00:05:34.116 SO libspdk_fsdev.so.1.0 00:05:34.116 CC lib/bdev/bdev_rpc.o 00:05:34.116 CC lib/event/app_rpc.o 00:05:34.116 SYMLINK libspdk_fsdev.so 00:05:34.116 CC lib/event/scheduler_static.o 00:05:34.116 LIB libspdk_virtio.a 00:05:34.116 SO libspdk_virtio.so.7.0 00:05:34.374 SYMLINK libspdk_virtio.so 00:05:34.374 CC lib/bdev/bdev_zone.o 00:05:34.374 CC lib/bdev/part.o 00:05:34.374 CC lib/bdev/scsi_nvme.o 00:05:34.374 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:34.374 LIB libspdk_event.a 00:05:34.374 SO libspdk_event.so.14.0 00:05:34.633 SYMLINK libspdk_event.so 00:05:34.892 LIB libspdk_fuse_dispatcher.a 00:05:35.149 SO libspdk_fuse_dispatcher.so.1.0 00:05:35.149 SYMLINK libspdk_fuse_dispatcher.so 00:05:35.149 LIB libspdk_nvme.a 00:05:35.407 SO libspdk_nvme.so.14.0 00:05:35.665 SYMLINK libspdk_nvme.so 00:05:35.665 LIB libspdk_blob.a 00:05:35.665 SO libspdk_blob.so.11.0 00:05:35.923 SYMLINK libspdk_blob.so 00:05:36.181 CC lib/blobfs/blobfs.o 00:05:36.181 CC lib/blobfs/tree.o 00:05:36.181 CC lib/lvol/lvol.o 00:05:36.748 LIB libspdk_bdev.a 00:05:37.006 SO libspdk_bdev.so.16.0 00:05:37.006 SYMLINK libspdk_bdev.so 00:05:37.006 LIB libspdk_blobfs.a 00:05:37.006 LIB libspdk_lvol.a 00:05:37.006 SO libspdk_blobfs.so.10.0 00:05:37.006 SO libspdk_lvol.so.10.0 00:05:37.006 SYMLINK libspdk_blobfs.so 00:05:37.006 SYMLINK libspdk_lvol.so 00:05:37.264 CC lib/ublk/ublk.o 00:05:37.264 CC lib/nbd/nbd.o 00:05:37.264 CC lib/nbd/nbd_rpc.o 00:05:37.264 CC lib/ublk/ublk_rpc.o 00:05:37.264 CC lib/scsi/dev.o 00:05:37.264 CC lib/scsi/lun.o 00:05:37.264 CC lib/scsi/scsi.o 00:05:37.264 CC lib/nvmf/ctrlr.o 00:05:37.264 CC lib/scsi/port.o 00:05:37.264 CC lib/ftl/ftl_core.o 00:05:37.264 CC lib/scsi/scsi_bdev.o 00:05:37.264 CC lib/nvmf/ctrlr_discovery.o 00:05:37.264 CC lib/scsi/scsi_pr.o 00:05:37.264 CC lib/scsi/scsi_rpc.o 00:05:37.523 CC lib/nvmf/ctrlr_bdev.o 00:05:37.523 CC lib/scsi/task.o 00:05:37.523 CC lib/ftl/ftl_init.o 00:05:37.523 CC lib/ftl/ftl_layout.o 00:05:37.781 LIB libspdk_nbd.a 00:05:37.781 SO libspdk_nbd.so.7.0 00:05:37.781 CC lib/nvmf/subsystem.o 00:05:37.781 CC lib/nvmf/nvmf.o 00:05:37.781 SYMLINK libspdk_nbd.so 00:05:37.781 CC lib/nvmf/nvmf_rpc.o 00:05:37.781 CC lib/ftl/ftl_debug.o 00:05:37.781 LIB libspdk_scsi.a 00:05:37.781 LIB libspdk_ublk.a 00:05:38.039 CC lib/ftl/ftl_io.o 00:05:38.039 SO libspdk_scsi.so.9.0 00:05:38.039 SO libspdk_ublk.so.3.0 00:05:38.039 CC lib/nvmf/transport.o 00:05:38.039 SYMLINK libspdk_ublk.so 00:05:38.039 CC lib/ftl/ftl_sb.o 00:05:38.039 SYMLINK libspdk_scsi.so 00:05:38.039 CC lib/nvmf/tcp.o 00:05:38.296 CC lib/iscsi/conn.o 00:05:38.296 CC lib/nvmf/stubs.o 00:05:38.296 CC lib/ftl/ftl_l2p.o 00:05:38.296 CC lib/vhost/vhost.o 00:05:38.554 CC lib/ftl/ftl_l2p_flat.o 00:05:38.554 CC lib/ftl/ftl_nv_cache.o 00:05:38.554 CC lib/ftl/ftl_band.o 00:05:38.813 CC lib/nvmf/mdns_server.o 00:05:38.813 CC lib/nvmf/vfio_user.o 00:05:38.813 CC lib/iscsi/init_grp.o 00:05:38.813 CC 
lib/iscsi/iscsi.o 00:05:39.070 CC lib/iscsi/param.o 00:05:39.070 CC lib/vhost/vhost_rpc.o 00:05:39.070 CC lib/iscsi/portal_grp.o 00:05:39.070 CC lib/iscsi/tgt_node.o 00:05:39.070 CC lib/nvmf/rdma.o 00:05:39.070 CC lib/iscsi/iscsi_subsystem.o 00:05:39.328 CC lib/iscsi/iscsi_rpc.o 00:05:39.328 CC lib/iscsi/task.o 00:05:39.586 CC lib/nvmf/auth.o 00:05:39.586 CC lib/vhost/vhost_scsi.o 00:05:39.586 CC lib/vhost/vhost_blk.o 00:05:39.586 CC lib/ftl/ftl_band_ops.o 00:05:39.586 CC lib/ftl/ftl_writer.o 00:05:39.844 CC lib/ftl/ftl_rq.o 00:05:39.844 CC lib/vhost/rte_vhost_user.o 00:05:39.844 CC lib/ftl/ftl_reloc.o 00:05:39.844 CC lib/ftl/ftl_l2p_cache.o 00:05:40.102 CC lib/ftl/ftl_p2l.o 00:05:40.360 LIB libspdk_iscsi.a 00:05:40.360 CC lib/ftl/ftl_p2l_log.o 00:05:40.360 SO libspdk_iscsi.so.8.0 00:05:40.360 CC lib/ftl/mngt/ftl_mngt.o 00:05:40.360 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:40.360 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:40.619 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:40.619 SYMLINK libspdk_iscsi.so 00:05:40.619 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:40.619 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:40.619 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:40.619 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:40.619 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:40.877 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:40.877 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:40.877 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:40.877 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:40.877 CC lib/ftl/utils/ftl_conf.o 00:05:40.877 CC lib/ftl/utils/ftl_md.o 00:05:40.877 CC lib/ftl/utils/ftl_mempool.o 00:05:40.877 LIB libspdk_vhost.a 00:05:40.877 CC lib/ftl/utils/ftl_bitmap.o 00:05:40.877 CC lib/ftl/utils/ftl_property.o 00:05:40.877 SO libspdk_vhost.so.8.0 00:05:41.136 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:41.136 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:41.136 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:41.136 SYMLINK libspdk_vhost.so 00:05:41.136 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:41.136 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:41.136 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:41.136 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:41.136 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:41.394 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:41.394 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:41.394 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:41.394 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:41.394 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:41.394 CC lib/ftl/base/ftl_base_dev.o 00:05:41.394 LIB libspdk_nvmf.a 00:05:41.394 CC lib/ftl/base/ftl_base_bdev.o 00:05:41.394 CC lib/ftl/ftl_trace.o 00:05:41.394 SO libspdk_nvmf.so.19.0 00:05:41.653 LIB libspdk_ftl.a 00:05:41.653 SYMLINK libspdk_nvmf.so 00:05:41.912 SO libspdk_ftl.so.9.0 00:05:42.171 SYMLINK libspdk_ftl.so 00:05:42.429 CC module/vfu_device/vfu_virtio.o 00:05:42.429 CC module/env_dpdk/env_dpdk_rpc.o 00:05:42.688 CC module/sock/uring/uring.o 00:05:42.688 CC module/sock/posix/posix.o 00:05:42.688 CC module/accel/error/accel_error.o 00:05:42.688 CC module/fsdev/aio/fsdev_aio.o 00:05:42.688 CC module/blob/bdev/blob_bdev.o 00:05:42.688 CC module/keyring/file/keyring.o 00:05:42.688 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:42.688 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:42.688 LIB libspdk_env_dpdk_rpc.a 00:05:42.688 SO libspdk_env_dpdk_rpc.so.6.0 00:05:42.688 SYMLINK libspdk_env_dpdk_rpc.so 00:05:42.688 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:42.688 CC module/keyring/file/keyring_rpc.o 00:05:42.688 LIB libspdk_scheduler_dpdk_governor.a 00:05:42.946 CC module/accel/error/accel_error_rpc.o 00:05:42.946 SO 
libspdk_scheduler_dpdk_governor.so.4.0 00:05:42.946 LIB libspdk_scheduler_dynamic.a 00:05:42.946 SO libspdk_scheduler_dynamic.so.4.0 00:05:42.946 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:42.946 LIB libspdk_blob_bdev.a 00:05:42.946 CC module/fsdev/aio/linux_aio_mgr.o 00:05:42.946 SYMLINK libspdk_scheduler_dynamic.so 00:05:42.946 LIB libspdk_keyring_file.a 00:05:42.946 SO libspdk_blob_bdev.so.11.0 00:05:42.946 LIB libspdk_accel_error.a 00:05:42.946 SO libspdk_keyring_file.so.2.0 00:05:42.946 SYMLINK libspdk_blob_bdev.so 00:05:42.946 SO libspdk_accel_error.so.2.0 00:05:43.205 SYMLINK libspdk_keyring_file.so 00:05:43.205 SYMLINK libspdk_accel_error.so 00:05:43.205 CC module/scheduler/gscheduler/gscheduler.o 00:05:43.205 CC module/accel/ioat/accel_ioat.o 00:05:43.205 CC module/vfu_device/vfu_virtio_blk.o 00:05:43.205 CC module/vfu_device/vfu_virtio_scsi.o 00:05:43.205 CC module/accel/dsa/accel_dsa.o 00:05:43.205 CC module/keyring/linux/keyring.o 00:05:43.205 LIB libspdk_fsdev_aio.a 00:05:43.205 CC module/accel/iaa/accel_iaa.o 00:05:43.205 LIB libspdk_scheduler_gscheduler.a 00:05:43.205 SO libspdk_fsdev_aio.so.1.0 00:05:43.205 LIB libspdk_sock_uring.a 00:05:43.463 SO libspdk_scheduler_gscheduler.so.4.0 00:05:43.463 SO libspdk_sock_uring.so.5.0 00:05:43.463 LIB libspdk_sock_posix.a 00:05:43.463 CC module/accel/ioat/accel_ioat_rpc.o 00:05:43.463 SYMLINK libspdk_fsdev_aio.so 00:05:43.463 SYMLINK libspdk_scheduler_gscheduler.so 00:05:43.463 CC module/keyring/linux/keyring_rpc.o 00:05:43.463 CC module/accel/iaa/accel_iaa_rpc.o 00:05:43.463 SYMLINK libspdk_sock_uring.so 00:05:43.463 CC module/accel/dsa/accel_dsa_rpc.o 00:05:43.463 SO libspdk_sock_posix.so.6.0 00:05:43.463 CC module/vfu_device/vfu_virtio_rpc.o 00:05:43.463 SYMLINK libspdk_sock_posix.so 00:05:43.463 CC module/vfu_device/vfu_virtio_fs.o 00:05:43.463 LIB libspdk_accel_ioat.a 00:05:43.463 LIB libspdk_keyring_linux.a 00:05:43.463 LIB libspdk_accel_iaa.a 00:05:43.463 LIB libspdk_accel_dsa.a 00:05:43.463 SO libspdk_accel_ioat.so.6.0 00:05:43.463 SO libspdk_keyring_linux.so.1.0 00:05:43.463 SO libspdk_accel_iaa.so.3.0 00:05:43.722 SO libspdk_accel_dsa.so.5.0 00:05:43.722 SYMLINK libspdk_accel_ioat.so 00:05:43.722 SYMLINK libspdk_keyring_linux.so 00:05:43.722 SYMLINK libspdk_accel_iaa.so 00:05:43.722 SYMLINK libspdk_accel_dsa.so 00:05:43.722 CC module/bdev/delay/vbdev_delay.o 00:05:43.722 CC module/bdev/error/vbdev_error.o 00:05:43.722 CC module/bdev/gpt/gpt.o 00:05:43.722 CC module/bdev/error/vbdev_error_rpc.o 00:05:43.722 CC module/blobfs/bdev/blobfs_bdev.o 00:05:43.722 LIB libspdk_vfu_device.a 00:05:43.722 CC module/bdev/lvol/vbdev_lvol.o 00:05:43.723 SO libspdk_vfu_device.so.3.0 00:05:43.723 CC module/bdev/malloc/bdev_malloc.o 00:05:43.723 CC module/bdev/null/bdev_null.o 00:05:43.723 CC module/bdev/nvme/bdev_nvme.o 00:05:43.981 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:43.981 SYMLINK libspdk_vfu_device.so 00:05:43.981 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:43.981 CC module/bdev/gpt/vbdev_gpt.o 00:05:43.981 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:43.981 LIB libspdk_bdev_error.a 00:05:43.981 SO libspdk_bdev_error.so.6.0 00:05:43.981 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:44.240 SYMLINK libspdk_bdev_error.so 00:05:44.240 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:44.240 LIB libspdk_blobfs_bdev.a 00:05:44.240 CC module/bdev/null/bdev_null_rpc.o 00:05:44.240 SO libspdk_blobfs_bdev.so.6.0 00:05:44.240 LIB libspdk_bdev_gpt.a 00:05:44.240 LIB libspdk_bdev_malloc.a 00:05:44.240 SYMLINK libspdk_blobfs_bdev.so 00:05:44.240 
CC module/bdev/nvme/nvme_rpc.o 00:05:44.240 SO libspdk_bdev_gpt.so.6.0 00:05:44.240 SO libspdk_bdev_malloc.so.6.0 00:05:44.240 CC module/bdev/passthru/vbdev_passthru.o 00:05:44.240 LIB libspdk_bdev_delay.a 00:05:44.240 LIB libspdk_bdev_null.a 00:05:44.240 SYMLINK libspdk_bdev_malloc.so 00:05:44.240 SO libspdk_bdev_delay.so.6.0 00:05:44.240 SYMLINK libspdk_bdev_gpt.so 00:05:44.240 SO libspdk_bdev_null.so.6.0 00:05:44.499 LIB libspdk_bdev_lvol.a 00:05:44.499 SYMLINK libspdk_bdev_delay.so 00:05:44.499 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:44.499 SYMLINK libspdk_bdev_null.so 00:05:44.499 SO libspdk_bdev_lvol.so.6.0 00:05:44.499 CC module/bdev/nvme/bdev_mdns_client.o 00:05:44.499 CC module/bdev/nvme/vbdev_opal.o 00:05:44.499 SYMLINK libspdk_bdev_lvol.so 00:05:44.499 CC module/bdev/raid/bdev_raid.o 00:05:44.499 CC module/bdev/split/vbdev_split.o 00:05:44.499 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:44.499 CC module/bdev/split/vbdev_split_rpc.o 00:05:44.499 LIB libspdk_bdev_passthru.a 00:05:44.757 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:44.757 SO libspdk_bdev_passthru.so.6.0 00:05:44.757 CC module/bdev/uring/bdev_uring.o 00:05:44.757 SYMLINK libspdk_bdev_passthru.so 00:05:44.757 CC module/bdev/uring/bdev_uring_rpc.o 00:05:44.757 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:44.757 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:44.757 LIB libspdk_bdev_split.a 00:05:44.757 CC module/bdev/raid/bdev_raid_rpc.o 00:05:44.757 SO libspdk_bdev_split.so.6.0 00:05:44.757 SYMLINK libspdk_bdev_split.so 00:05:45.016 CC module/bdev/raid/bdev_raid_sb.o 00:05:45.016 CC module/bdev/raid/raid0.o 00:05:45.016 LIB libspdk_bdev_zone_block.a 00:05:45.016 SO libspdk_bdev_zone_block.so.6.0 00:05:45.016 CC module/bdev/raid/raid1.o 00:05:45.016 CC module/bdev/aio/bdev_aio.o 00:05:45.016 SYMLINK libspdk_bdev_zone_block.so 00:05:45.016 CC module/bdev/ftl/bdev_ftl.o 00:05:45.016 LIB libspdk_bdev_uring.a 00:05:45.016 CC module/bdev/iscsi/bdev_iscsi.o 00:05:45.274 SO libspdk_bdev_uring.so.6.0 00:05:45.274 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:45.274 CC module/bdev/raid/concat.o 00:05:45.274 SYMLINK libspdk_bdev_uring.so 00:05:45.274 CC module/bdev/aio/bdev_aio_rpc.o 00:05:45.274 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:45.274 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:45.274 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:45.274 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:45.532 LIB libspdk_bdev_ftl.a 00:05:45.532 LIB libspdk_bdev_aio.a 00:05:45.532 SO libspdk_bdev_ftl.so.6.0 00:05:45.532 SO libspdk_bdev_aio.so.6.0 00:05:45.532 SYMLINK libspdk_bdev_ftl.so 00:05:45.532 SYMLINK libspdk_bdev_aio.so 00:05:45.532 LIB libspdk_bdev_raid.a 00:05:45.532 LIB libspdk_bdev_iscsi.a 00:05:45.532 SO libspdk_bdev_raid.so.6.0 00:05:45.532 SO libspdk_bdev_iscsi.so.6.0 00:05:45.791 SYMLINK libspdk_bdev_iscsi.so 00:05:45.791 SYMLINK libspdk_bdev_raid.so 00:05:45.791 LIB libspdk_bdev_virtio.a 00:05:45.791 SO libspdk_bdev_virtio.so.6.0 00:05:45.791 SYMLINK libspdk_bdev_virtio.so 00:05:46.062 LIB libspdk_bdev_nvme.a 00:05:46.357 SO libspdk_bdev_nvme.so.7.0 00:05:46.357 SYMLINK libspdk_bdev_nvme.so 00:05:46.931 CC module/event/subsystems/vmd/vmd.o 00:05:46.931 CC module/event/subsystems/sock/sock.o 00:05:46.931 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:46.931 CC module/event/subsystems/keyring/keyring.o 00:05:46.931 CC module/event/subsystems/scheduler/scheduler.o 00:05:46.931 CC module/event/subsystems/fsdev/fsdev.o 00:05:46.931 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:46.931 CC 
module/event/subsystems/iobuf/iobuf.o 00:05:46.931 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:46.931 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:46.931 LIB libspdk_event_scheduler.a 00:05:46.931 LIB libspdk_event_fsdev.a 00:05:46.931 LIB libspdk_event_keyring.a 00:05:46.931 SO libspdk_event_scheduler.so.4.0 00:05:46.931 SO libspdk_event_fsdev.so.1.0 00:05:46.931 LIB libspdk_event_sock.a 00:05:46.931 LIB libspdk_event_vmd.a 00:05:46.931 SO libspdk_event_keyring.so.1.0 00:05:46.931 LIB libspdk_event_vfu_tgt.a 00:05:46.931 LIB libspdk_event_vhost_blk.a 00:05:46.931 LIB libspdk_event_iobuf.a 00:05:46.931 SO libspdk_event_sock.so.5.0 00:05:46.931 SO libspdk_event_vmd.so.6.0 00:05:46.931 SYMLINK libspdk_event_fsdev.so 00:05:46.931 SYMLINK libspdk_event_scheduler.so 00:05:46.931 SO libspdk_event_vhost_blk.so.3.0 00:05:46.931 SO libspdk_event_vfu_tgt.so.3.0 00:05:46.931 SO libspdk_event_iobuf.so.3.0 00:05:46.931 SYMLINK libspdk_event_keyring.so 00:05:46.931 SYMLINK libspdk_event_sock.so 00:05:46.931 SYMLINK libspdk_event_vhost_blk.so 00:05:46.931 SYMLINK libspdk_event_vmd.so 00:05:46.931 SYMLINK libspdk_event_vfu_tgt.so 00:05:46.931 SYMLINK libspdk_event_iobuf.so 00:05:47.190 CC module/event/subsystems/accel/accel.o 00:05:47.449 LIB libspdk_event_accel.a 00:05:47.449 SO libspdk_event_accel.so.6.0 00:05:47.449 SYMLINK libspdk_event_accel.so 00:05:47.707 CC module/event/subsystems/bdev/bdev.o 00:05:47.965 LIB libspdk_event_bdev.a 00:05:47.965 SO libspdk_event_bdev.so.6.0 00:05:48.222 SYMLINK libspdk_event_bdev.so 00:05:48.222 CC module/event/subsystems/ublk/ublk.o 00:05:48.222 CC module/event/subsystems/nbd/nbd.o 00:05:48.222 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:48.222 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:48.222 CC module/event/subsystems/scsi/scsi.o 00:05:48.480 LIB libspdk_event_ublk.a 00:05:48.480 LIB libspdk_event_nbd.a 00:05:48.480 LIB libspdk_event_scsi.a 00:05:48.480 SO libspdk_event_ublk.so.3.0 00:05:48.480 SO libspdk_event_nbd.so.6.0 00:05:48.480 SO libspdk_event_scsi.so.6.0 00:05:48.738 SYMLINK libspdk_event_ublk.so 00:05:48.738 SYMLINK libspdk_event_nbd.so 00:05:48.738 SYMLINK libspdk_event_scsi.so 00:05:48.738 LIB libspdk_event_nvmf.a 00:05:48.738 SO libspdk_event_nvmf.so.6.0 00:05:48.738 SYMLINK libspdk_event_nvmf.so 00:05:48.996 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:48.996 CC module/event/subsystems/iscsi/iscsi.o 00:05:48.996 LIB libspdk_event_vhost_scsi.a 00:05:48.996 LIB libspdk_event_iscsi.a 00:05:48.996 SO libspdk_event_vhost_scsi.so.3.0 00:05:48.996 SO libspdk_event_iscsi.so.6.0 00:05:49.255 SYMLINK libspdk_event_vhost_scsi.so 00:05:49.255 SYMLINK libspdk_event_iscsi.so 00:05:49.255 SO libspdk.so.6.0 00:05:49.255 SYMLINK libspdk.so 00:05:49.513 CXX app/trace/trace.o 00:05:49.513 CC app/trace_record/trace_record.o 00:05:49.513 TEST_HEADER include/spdk/accel.h 00:05:49.513 TEST_HEADER include/spdk/accel_module.h 00:05:49.513 TEST_HEADER include/spdk/assert.h 00:05:49.513 TEST_HEADER include/spdk/barrier.h 00:05:49.513 TEST_HEADER include/spdk/base64.h 00:05:49.513 TEST_HEADER include/spdk/bdev.h 00:05:49.513 TEST_HEADER include/spdk/bdev_module.h 00:05:49.513 TEST_HEADER include/spdk/bdev_zone.h 00:05:49.513 TEST_HEADER include/spdk/bit_array.h 00:05:49.513 TEST_HEADER include/spdk/bit_pool.h 00:05:49.513 TEST_HEADER include/spdk/blob_bdev.h 00:05:49.513 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:49.513 TEST_HEADER include/spdk/blobfs.h 00:05:49.513 TEST_HEADER include/spdk/blob.h 00:05:49.514 TEST_HEADER 
include/spdk/conf.h 00:05:49.514 TEST_HEADER include/spdk/config.h 00:05:49.514 TEST_HEADER include/spdk/cpuset.h 00:05:49.514 TEST_HEADER include/spdk/crc16.h 00:05:49.514 TEST_HEADER include/spdk/crc32.h 00:05:49.514 TEST_HEADER include/spdk/crc64.h 00:05:49.514 TEST_HEADER include/spdk/dif.h 00:05:49.514 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:49.772 TEST_HEADER include/spdk/dma.h 00:05:49.772 TEST_HEADER include/spdk/endian.h 00:05:49.772 TEST_HEADER include/spdk/env_dpdk.h 00:05:49.772 TEST_HEADER include/spdk/env.h 00:05:49.772 TEST_HEADER include/spdk/event.h 00:05:49.772 CC app/nvmf_tgt/nvmf_main.o 00:05:49.772 TEST_HEADER include/spdk/fd_group.h 00:05:49.772 TEST_HEADER include/spdk/fd.h 00:05:49.772 TEST_HEADER include/spdk/file.h 00:05:49.772 TEST_HEADER include/spdk/fsdev.h 00:05:49.772 TEST_HEADER include/spdk/fsdev_module.h 00:05:49.772 TEST_HEADER include/spdk/ftl.h 00:05:49.772 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:49.772 TEST_HEADER include/spdk/gpt_spec.h 00:05:49.772 TEST_HEADER include/spdk/hexlify.h 00:05:49.772 TEST_HEADER include/spdk/histogram_data.h 00:05:49.772 TEST_HEADER include/spdk/idxd.h 00:05:49.772 TEST_HEADER include/spdk/idxd_spec.h 00:05:49.772 TEST_HEADER include/spdk/init.h 00:05:49.772 CC examples/ioat/perf/perf.o 00:05:49.772 TEST_HEADER include/spdk/ioat.h 00:05:49.772 TEST_HEADER include/spdk/ioat_spec.h 00:05:49.772 TEST_HEADER include/spdk/iscsi_spec.h 00:05:49.772 CC examples/util/zipf/zipf.o 00:05:49.772 TEST_HEADER include/spdk/json.h 00:05:49.772 TEST_HEADER include/spdk/jsonrpc.h 00:05:49.772 TEST_HEADER include/spdk/keyring.h 00:05:49.772 TEST_HEADER include/spdk/keyring_module.h 00:05:49.772 TEST_HEADER include/spdk/likely.h 00:05:49.772 TEST_HEADER include/spdk/log.h 00:05:49.772 TEST_HEADER include/spdk/lvol.h 00:05:49.772 CC test/thread/poller_perf/poller_perf.o 00:05:49.772 TEST_HEADER include/spdk/md5.h 00:05:49.772 TEST_HEADER include/spdk/memory.h 00:05:49.772 TEST_HEADER include/spdk/mmio.h 00:05:49.772 TEST_HEADER include/spdk/nbd.h 00:05:49.772 TEST_HEADER include/spdk/net.h 00:05:49.772 TEST_HEADER include/spdk/notify.h 00:05:49.772 TEST_HEADER include/spdk/nvme.h 00:05:49.772 TEST_HEADER include/spdk/nvme_intel.h 00:05:49.772 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:49.772 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:49.772 TEST_HEADER include/spdk/nvme_spec.h 00:05:49.772 TEST_HEADER include/spdk/nvme_zns.h 00:05:49.772 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:49.772 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:49.772 CC test/dma/test_dma/test_dma.o 00:05:49.772 TEST_HEADER include/spdk/nvmf.h 00:05:49.772 TEST_HEADER include/spdk/nvmf_spec.h 00:05:49.773 TEST_HEADER include/spdk/nvmf_transport.h 00:05:49.773 TEST_HEADER include/spdk/opal.h 00:05:49.773 TEST_HEADER include/spdk/opal_spec.h 00:05:49.773 TEST_HEADER include/spdk/pci_ids.h 00:05:49.773 TEST_HEADER include/spdk/pipe.h 00:05:49.773 TEST_HEADER include/spdk/queue.h 00:05:49.773 TEST_HEADER include/spdk/reduce.h 00:05:49.773 CC test/app/bdev_svc/bdev_svc.o 00:05:49.773 TEST_HEADER include/spdk/rpc.h 00:05:49.773 TEST_HEADER include/spdk/scheduler.h 00:05:49.773 TEST_HEADER include/spdk/scsi.h 00:05:49.773 TEST_HEADER include/spdk/scsi_spec.h 00:05:49.773 TEST_HEADER include/spdk/sock.h 00:05:49.773 TEST_HEADER include/spdk/stdinc.h 00:05:49.773 TEST_HEADER include/spdk/string.h 00:05:49.773 TEST_HEADER include/spdk/thread.h 00:05:49.773 TEST_HEADER include/spdk/trace.h 00:05:49.773 TEST_HEADER include/spdk/trace_parser.h 00:05:49.773 
TEST_HEADER include/spdk/tree.h 00:05:49.773 TEST_HEADER include/spdk/ublk.h 00:05:49.773 TEST_HEADER include/spdk/util.h 00:05:49.773 TEST_HEADER include/spdk/uuid.h 00:05:49.773 TEST_HEADER include/spdk/version.h 00:05:49.773 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:49.773 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:49.773 TEST_HEADER include/spdk/vhost.h 00:05:49.773 TEST_HEADER include/spdk/vmd.h 00:05:49.773 TEST_HEADER include/spdk/xor.h 00:05:49.773 TEST_HEADER include/spdk/zipf.h 00:05:49.773 CXX test/cpp_headers/accel.o 00:05:49.773 LINK interrupt_tgt 00:05:50.031 LINK zipf 00:05:50.031 LINK spdk_trace_record 00:05:50.031 LINK nvmf_tgt 00:05:50.031 LINK poller_perf 00:05:50.031 LINK ioat_perf 00:05:50.031 LINK bdev_svc 00:05:50.031 LINK spdk_trace 00:05:50.031 CXX test/cpp_headers/accel_module.o 00:05:50.031 CXX test/cpp_headers/assert.o 00:05:50.031 CXX test/cpp_headers/barrier.o 00:05:50.031 CXX test/cpp_headers/base64.o 00:05:50.290 CC examples/ioat/verify/verify.o 00:05:50.290 CXX test/cpp_headers/bdev.o 00:05:50.290 LINK test_dma 00:05:50.290 CC app/iscsi_tgt/iscsi_tgt.o 00:05:50.290 CC test/env/mem_callbacks/mem_callbacks.o 00:05:50.290 CC app/spdk_nvme_perf/perf.o 00:05:50.290 CC app/spdk_lspci/spdk_lspci.o 00:05:50.290 CC app/spdk_tgt/spdk_tgt.o 00:05:50.549 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:50.549 CXX test/cpp_headers/bdev_module.o 00:05:50.549 LINK verify 00:05:50.549 CC examples/thread/thread/thread_ex.o 00:05:50.549 LINK spdk_lspci 00:05:50.549 LINK iscsi_tgt 00:05:50.549 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:50.549 LINK spdk_tgt 00:05:50.808 CXX test/cpp_headers/bdev_zone.o 00:05:50.808 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:50.808 CXX test/cpp_headers/bit_array.o 00:05:50.808 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:50.808 LINK thread 00:05:50.808 LINK nvme_fuzz 00:05:50.808 CXX test/cpp_headers/bit_pool.o 00:05:51.067 CC test/env/vtophys/vtophys.o 00:05:51.067 LINK mem_callbacks 00:05:51.067 CC examples/vmd/lsvmd/lsvmd.o 00:05:51.067 CC examples/sock/hello_world/hello_sock.o 00:05:51.067 CC examples/vmd/led/led.o 00:05:51.067 CXX test/cpp_headers/blob_bdev.o 00:05:51.067 CXX test/cpp_headers/blobfs_bdev.o 00:05:51.067 LINK vtophys 00:05:51.067 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:51.067 LINK vhost_fuzz 00:05:51.067 LINK lsvmd 00:05:51.326 LINK led 00:05:51.326 LINK spdk_nvme_perf 00:05:51.326 CXX test/cpp_headers/blobfs.o 00:05:51.326 LINK hello_sock 00:05:51.326 CXX test/cpp_headers/blob.o 00:05:51.326 LINK env_dpdk_post_init 00:05:51.326 CC test/env/memory/memory_ut.o 00:05:51.326 CXX test/cpp_headers/conf.o 00:05:51.326 CC test/env/pci/pci_ut.o 00:05:51.326 CC test/rpc_client/rpc_client_test.o 00:05:51.326 CXX test/cpp_headers/config.o 00:05:51.585 CXX test/cpp_headers/cpuset.o 00:05:51.585 CXX test/cpp_headers/crc16.o 00:05:51.585 CC app/spdk_nvme_identify/identify.o 00:05:51.585 CC app/spdk_nvme_discover/discovery_aer.o 00:05:51.585 LINK rpc_client_test 00:05:51.585 CC examples/idxd/perf/perf.o 00:05:51.585 CXX test/cpp_headers/crc32.o 00:05:51.585 CC app/spdk_top/spdk_top.o 00:05:51.844 CC app/vhost/vhost.o 00:05:51.844 CXX test/cpp_headers/crc64.o 00:05:51.844 LINK spdk_nvme_discover 00:05:51.844 LINK pci_ut 00:05:51.844 CC app/spdk_dd/spdk_dd.o 00:05:51.844 CXX test/cpp_headers/dif.o 00:05:52.102 LINK vhost 00:05:52.102 LINK idxd_perf 00:05:52.102 CXX test/cpp_headers/dma.o 00:05:52.102 CC app/fio/nvme/fio_plugin.o 00:05:52.102 CXX test/cpp_headers/endian.o 00:05:52.361 LINK iscsi_fuzz 
00:05:52.361 LINK spdk_nvme_identify 00:05:52.361 CC test/accel/dif/dif.o 00:05:52.361 CC examples/accel/perf/accel_perf.o 00:05:52.361 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:52.361 CXX test/cpp_headers/env_dpdk.o 00:05:52.361 LINK spdk_dd 00:05:52.619 LINK spdk_top 00:05:52.619 LINK memory_ut 00:05:52.619 CC test/app/histogram_perf/histogram_perf.o 00:05:52.619 CXX test/cpp_headers/env.o 00:05:52.619 LINK hello_fsdev 00:05:52.619 LINK spdk_nvme 00:05:52.619 CXX test/cpp_headers/event.o 00:05:52.619 CC test/app/jsoncat/jsoncat.o 00:05:52.878 LINK histogram_perf 00:05:52.878 CC examples/blob/hello_world/hello_blob.o 00:05:52.878 CC test/app/stub/stub.o 00:05:52.878 LINK accel_perf 00:05:52.878 CXX test/cpp_headers/fd_group.o 00:05:52.878 LINK jsoncat 00:05:52.878 CC app/fio/bdev/fio_plugin.o 00:05:52.878 CC examples/blob/cli/blobcli.o 00:05:52.878 LINK dif 00:05:53.136 LINK stub 00:05:53.136 LINK hello_blob 00:05:53.136 CXX test/cpp_headers/fd.o 00:05:53.136 CC examples/nvme/hello_world/hello_world.o 00:05:53.136 CC examples/nvme/reconnect/reconnect.o 00:05:53.136 CC test/blobfs/mkfs/mkfs.o 00:05:53.136 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:53.136 CXX test/cpp_headers/file.o 00:05:53.136 CXX test/cpp_headers/fsdev.o 00:05:53.136 CXX test/cpp_headers/fsdev_module.o 00:05:53.395 LINK hello_world 00:05:53.395 LINK mkfs 00:05:53.395 CXX test/cpp_headers/ftl.o 00:05:53.395 LINK blobcli 00:05:53.395 LINK spdk_bdev 00:05:53.395 CXX test/cpp_headers/fuse_dispatcher.o 00:05:53.395 CC examples/bdev/hello_world/hello_bdev.o 00:05:53.395 LINK reconnect 00:05:53.395 CXX test/cpp_headers/gpt_spec.o 00:05:53.395 CC examples/bdev/bdevperf/bdevperf.o 00:05:53.654 CXX test/cpp_headers/hexlify.o 00:05:53.654 CXX test/cpp_headers/histogram_data.o 00:05:53.654 LINK nvme_manage 00:05:53.654 CXX test/cpp_headers/idxd.o 00:05:53.654 LINK hello_bdev 00:05:53.654 CC examples/nvme/arbitration/arbitration.o 00:05:53.654 CXX test/cpp_headers/idxd_spec.o 00:05:53.913 CXX test/cpp_headers/init.o 00:05:53.913 CC test/event/event_perf/event_perf.o 00:05:53.913 CC test/event/reactor/reactor.o 00:05:53.913 CC test/nvme/aer/aer.o 00:05:53.913 CC test/lvol/esnap/esnap.o 00:05:53.913 CC test/event/reactor_perf/reactor_perf.o 00:05:53.913 LINK event_perf 00:05:53.913 LINK reactor 00:05:53.913 CXX test/cpp_headers/ioat.o 00:05:53.913 CC examples/nvme/hotplug/hotplug.o 00:05:53.913 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:54.171 LINK arbitration 00:05:54.171 CXX test/cpp_headers/ioat_spec.o 00:05:54.171 LINK reactor_perf 00:05:54.171 LINK aer 00:05:54.171 LINK cmb_copy 00:05:54.171 CC test/nvme/reset/reset.o 00:05:54.171 LINK hotplug 00:05:54.171 CXX test/cpp_headers/iscsi_spec.o 00:05:54.171 CXX test/cpp_headers/json.o 00:05:54.430 CC test/bdev/bdevio/bdevio.o 00:05:54.430 LINK bdevperf 00:05:54.430 CC test/nvme/sgl/sgl.o 00:05:54.430 CC test/event/app_repeat/app_repeat.o 00:05:54.430 CXX test/cpp_headers/jsonrpc.o 00:05:54.430 CC examples/nvme/abort/abort.o 00:05:54.430 LINK reset 00:05:54.430 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:54.688 LINK app_repeat 00:05:54.688 CC test/event/scheduler/scheduler.o 00:05:54.688 LINK sgl 00:05:54.688 CC test/nvme/e2edp/nvme_dp.o 00:05:54.688 CXX test/cpp_headers/keyring.o 00:05:54.688 LINK bdevio 00:05:54.688 LINK pmr_persistence 00:05:54.688 CC test/nvme/overhead/overhead.o 00:05:54.947 CC test/nvme/err_injection/err_injection.o 00:05:54.947 LINK scheduler 00:05:54.947 CXX test/cpp_headers/keyring_module.o 00:05:54.947 LINK abort 00:05:54.947 CC 
test/nvme/startup/startup.o 00:05:54.947 CXX test/cpp_headers/likely.o 00:05:54.947 CXX test/cpp_headers/log.o 00:05:54.947 LINK nvme_dp 00:05:54.947 CXX test/cpp_headers/lvol.o 00:05:54.947 LINK err_injection 00:05:55.206 LINK startup 00:05:55.206 CXX test/cpp_headers/md5.o 00:05:55.206 LINK overhead 00:05:55.206 CC test/nvme/reserve/reserve.o 00:05:55.206 CC test/nvme/connect_stress/connect_stress.o 00:05:55.206 CC test/nvme/simple_copy/simple_copy.o 00:05:55.206 CXX test/cpp_headers/memory.o 00:05:55.206 CC examples/nvmf/nvmf/nvmf.o 00:05:55.206 CC test/nvme/boot_partition/boot_partition.o 00:05:55.465 CC test/nvme/compliance/nvme_compliance.o 00:05:55.465 CXX test/cpp_headers/mmio.o 00:05:55.465 LINK reserve 00:05:55.465 CC test/nvme/fused_ordering/fused_ordering.o 00:05:55.465 LINK connect_stress 00:05:55.465 LINK simple_copy 00:05:55.465 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:55.465 LINK boot_partition 00:05:55.465 CXX test/cpp_headers/nbd.o 00:05:55.724 CXX test/cpp_headers/net.o 00:05:55.724 LINK nvmf 00:05:55.724 CXX test/cpp_headers/notify.o 00:05:55.724 LINK fused_ordering 00:05:55.724 CC test/nvme/fdp/fdp.o 00:05:55.724 CXX test/cpp_headers/nvme.o 00:05:55.724 CC test/nvme/cuse/cuse.o 00:05:55.724 LINK doorbell_aers 00:05:55.724 LINK nvme_compliance 00:05:55.724 CXX test/cpp_headers/nvme_intel.o 00:05:55.724 CXX test/cpp_headers/nvme_ocssd.o 00:05:55.724 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:55.724 CXX test/cpp_headers/nvme_spec.o 00:05:55.724 CXX test/cpp_headers/nvme_zns.o 00:05:55.983 CXX test/cpp_headers/nvmf_cmd.o 00:05:55.983 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:55.983 CXX test/cpp_headers/nvmf.o 00:05:55.983 LINK fdp 00:05:55.983 CXX test/cpp_headers/nvmf_spec.o 00:05:55.983 CXX test/cpp_headers/nvmf_transport.o 00:05:55.983 CXX test/cpp_headers/opal.o 00:05:55.983 CXX test/cpp_headers/opal_spec.o 00:05:55.983 CXX test/cpp_headers/pci_ids.o 00:05:55.983 CXX test/cpp_headers/pipe.o 00:05:56.243 CXX test/cpp_headers/queue.o 00:05:56.243 CXX test/cpp_headers/reduce.o 00:05:56.243 CXX test/cpp_headers/rpc.o 00:05:56.243 CXX test/cpp_headers/scheduler.o 00:05:56.243 CXX test/cpp_headers/scsi.o 00:05:56.243 CXX test/cpp_headers/scsi_spec.o 00:05:56.243 CXX test/cpp_headers/sock.o 00:05:56.243 CXX test/cpp_headers/stdinc.o 00:05:56.243 CXX test/cpp_headers/string.o 00:05:56.243 CXX test/cpp_headers/thread.o 00:05:56.243 CXX test/cpp_headers/trace.o 00:05:56.243 CXX test/cpp_headers/trace_parser.o 00:05:56.502 CXX test/cpp_headers/tree.o 00:05:56.502 CXX test/cpp_headers/ublk.o 00:05:56.502 CXX test/cpp_headers/util.o 00:05:56.502 CXX test/cpp_headers/uuid.o 00:05:56.502 CXX test/cpp_headers/version.o 00:05:56.502 CXX test/cpp_headers/vfio_user_pci.o 00:05:56.502 CXX test/cpp_headers/vfio_user_spec.o 00:05:56.502 CXX test/cpp_headers/vhost.o 00:05:56.502 CXX test/cpp_headers/vmd.o 00:05:56.502 CXX test/cpp_headers/xor.o 00:05:56.502 CXX test/cpp_headers/zipf.o 00:05:57.071 LINK cuse 00:05:58.974 LINK esnap 00:05:58.974 00:05:58.974 real 1m27.816s 00:05:58.974 user 7m9.031s 00:05:58.974 sys 1m10.297s 00:05:58.974 02:10:00 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:58.974 02:10:00 make -- common/autotest_common.sh@10 -- $ set +x 00:05:58.974 ************************************ 00:05:58.974 END TEST make 00:05:58.974 ************************************ 00:05:58.975 02:10:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:58.975 02:10:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:58.975 02:10:00 -- pm/common@40 -- 
$ local monitor pid pids signal=TERM 00:05:58.975 02:10:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.975 02:10:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:59.234 02:10:00 -- pm/common@44 -- $ pid=6030 00:05:59.234 02:10:00 -- pm/common@50 -- $ kill -TERM 6030 00:05:59.234 02:10:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:59.234 02:10:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:59.234 02:10:00 -- pm/common@44 -- $ pid=6032 00:05:59.234 02:10:00 -- pm/common@50 -- $ kill -TERM 6032 00:05:59.234 02:10:00 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.234 02:10:00 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:59.234 02:10:00 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.234 02:10:01 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.234 02:10:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.234 02:10:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.234 02:10:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.234 02:10:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.234 02:10:01 -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.234 02:10:01 -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.234 02:10:01 -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.234 02:10:01 -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.234 02:10:01 -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.234 02:10:01 -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.234 02:10:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.234 02:10:01 -- scripts/common.sh@344 -- # case "$op" in 00:05:59.234 02:10:01 -- scripts/common.sh@345 -- # : 1 00:05:59.234 02:10:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.234 02:10:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.234 02:10:01 -- scripts/common.sh@365 -- # decimal 1 00:05:59.234 02:10:01 -- scripts/common.sh@353 -- # local d=1 00:05:59.234 02:10:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.234 02:10:01 -- scripts/common.sh@355 -- # echo 1 00:05:59.234 02:10:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.234 02:10:01 -- scripts/common.sh@366 -- # decimal 2 00:05:59.234 02:10:01 -- scripts/common.sh@353 -- # local d=2 00:05:59.234 02:10:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.234 02:10:01 -- scripts/common.sh@355 -- # echo 2 00:05:59.234 02:10:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.234 02:10:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.234 02:10:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.234 02:10:01 -- scripts/common.sh@368 -- # return 0 00:05:59.234 02:10:01 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.234 02:10:01 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.234 --rc genhtml_branch_coverage=1 00:05:59.234 --rc genhtml_function_coverage=1 00:05:59.234 --rc genhtml_legend=1 00:05:59.234 --rc geninfo_all_blocks=1 00:05:59.234 --rc geninfo_unexecuted_blocks=1 00:05:59.234 00:05:59.234 ' 00:05:59.234 02:10:01 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.234 --rc genhtml_branch_coverage=1 00:05:59.234 --rc genhtml_function_coverage=1 00:05:59.234 --rc genhtml_legend=1 00:05:59.234 --rc geninfo_all_blocks=1 00:05:59.234 --rc geninfo_unexecuted_blocks=1 00:05:59.234 00:05:59.234 ' 00:05:59.234 02:10:01 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.234 --rc genhtml_branch_coverage=1 00:05:59.234 --rc genhtml_function_coverage=1 00:05:59.234 --rc genhtml_legend=1 00:05:59.234 --rc geninfo_all_blocks=1 00:05:59.234 --rc geninfo_unexecuted_blocks=1 00:05:59.234 00:05:59.234 ' 00:05:59.234 02:10:01 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.234 --rc genhtml_branch_coverage=1 00:05:59.234 --rc genhtml_function_coverage=1 00:05:59.234 --rc genhtml_legend=1 00:05:59.235 --rc geninfo_all_blocks=1 00:05:59.235 --rc geninfo_unexecuted_blocks=1 00:05:59.235 00:05:59.235 ' 00:05:59.235 02:10:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:59.235 02:10:01 -- nvmf/common.sh@7 -- # uname -s 00:05:59.235 02:10:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.235 02:10:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.235 02:10:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.235 02:10:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.235 02:10:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.235 02:10:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.235 02:10:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.235 02:10:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.235 02:10:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.235 02:10:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.235 02:10:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:05:59.235 
02:10:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:05:59.235 02:10:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.235 02:10:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.235 02:10:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:59.235 02:10:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.235 02:10:01 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:59.235 02:10:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.235 02:10:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.235 02:10:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.235 02:10:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.235 02:10:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.235 02:10:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.235 02:10:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.235 02:10:01 -- paths/export.sh@5 -- # export PATH 00:05:59.235 02:10:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.235 02:10:01 -- nvmf/common.sh@51 -- # : 0 00:05:59.235 02:10:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.235 02:10:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:59.235 02:10:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.235 02:10:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.235 02:10:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.235 02:10:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:59.235 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.235 02:10:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.235 02:10:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.235 02:10:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.235 02:10:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:59.235 02:10:01 -- spdk/autotest.sh@32 -- # uname -s 00:05:59.235 02:10:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:59.235 02:10:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:59.235 02:10:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:59.235 02:10:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:59.235 02:10:01 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:59.235 02:10:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:59.494 02:10:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:59.494 02:10:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:59.494 02:10:01 -- spdk/autotest.sh@48 -- # udevadm_pid=67572 00:05:59.494 02:10:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:59.494 02:10:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:59.494 02:10:01 -- pm/common@17 -- # local monitor 00:05:59.494 02:10:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:59.494 02:10:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:59.494 02:10:01 -- pm/common@25 -- # sleep 1 00:05:59.494 02:10:01 -- pm/common@21 -- # date +%s 00:05:59.494 02:10:01 -- pm/common@21 -- # date +%s 00:05:59.494 02:10:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731031801 00:05:59.494 02:10:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731031801 00:05:59.494 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731031801_collect-cpu-load.pm.log 00:05:59.494 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731031801_collect-vmstat.pm.log 00:06:00.431 02:10:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:00.431 02:10:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:00.431 02:10:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:00.431 02:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.431 02:10:02 -- spdk/autotest.sh@59 -- # create_test_list 00:06:00.431 02:10:02 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:00.431 02:10:02 -- common/autotest_common.sh@10 -- # set +x 00:06:00.431 02:10:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:00.431 02:10:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:00.431 02:10:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:00.431 02:10:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:00.431 02:10:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:00.431 02:10:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:00.431 02:10:02 -- common/autotest_common.sh@1455 -- # uname 00:06:00.431 02:10:02 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:00.431 02:10:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:00.431 02:10:02 -- common/autotest_common.sh@1475 -- # uname 00:06:00.431 02:10:02 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:00.431 02:10:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:00.431 02:10:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:00.431 lcov: LCOV version 1.15 00:06:00.431 02:10:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:15.311 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:15.311 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:30.221 02:10:31 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:30.221 02:10:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:30.221 02:10:31 -- common/autotest_common.sh@10 -- # set +x 00:06:30.221 02:10:31 -- spdk/autotest.sh@78 -- # rm -f 00:06:30.221 02:10:31 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:30.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:30.480 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:30.480 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:30.480 02:10:32 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:30.480 02:10:32 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:30.480 02:10:32 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:30.480 02:10:32 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:30.480 02:10:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:30.480 02:10:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:30.480 02:10:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:30.480 02:10:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:30.480 02:10:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:30.480 02:10:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:30.480 02:10:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:30.480 02:10:32 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:30.480 02:10:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:30.480 02:10:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:30.480 02:10:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:30.480 02:10:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:30.480 02:10:32 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:30.480 02:10:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:30.480 02:10:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:30.480 02:10:32 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:30.480 02:10:32 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:06:30.480 02:10:32 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:30.480 02:10:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:30.480 02:10:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:30.480 02:10:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:30.480 02:10:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:30.480 02:10:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:30.480 02:10:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:30.480 02:10:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:30.480 02:10:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:30.740 No valid GPT data, bailing 
00:06:30.740 02:10:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:30.740 02:10:32 -- scripts/common.sh@394 -- # pt= 00:06:30.740 02:10:32 -- scripts/common.sh@395 -- # return 1 00:06:30.740 02:10:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:30.740 1+0 records in 00:06:30.740 1+0 records out 00:06:30.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00346007 s, 303 MB/s 00:06:30.740 02:10:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:30.740 02:10:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:30.740 02:10:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:30.740 02:10:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:30.740 02:10:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:30.740 No valid GPT data, bailing 00:06:30.740 02:10:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:30.740 02:10:32 -- scripts/common.sh@394 -- # pt= 00:06:30.740 02:10:32 -- scripts/common.sh@395 -- # return 1 00:06:30.740 02:10:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:30.740 1+0 records in 00:06:30.740 1+0 records out 00:06:30.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435513 s, 241 MB/s 00:06:30.740 02:10:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:30.740 02:10:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:30.740 02:10:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:30.740 02:10:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:30.740 02:10:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:30.740 No valid GPT data, bailing 00:06:30.740 02:10:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:30.740 02:10:32 -- scripts/common.sh@394 -- # pt= 00:06:30.740 02:10:32 -- scripts/common.sh@395 -- # return 1 00:06:30.740 02:10:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:30.740 1+0 records in 00:06:30.740 1+0 records out 00:06:30.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.004526 s, 232 MB/s 00:06:30.740 02:10:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:30.740 02:10:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:30.740 02:10:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:30.740 02:10:32 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:30.740 02:10:32 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:30.999 No valid GPT data, bailing 00:06:30.999 02:10:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:30.999 02:10:32 -- scripts/common.sh@394 -- # pt= 00:06:30.999 02:10:32 -- scripts/common.sh@395 -- # return 1 00:06:30.999 02:10:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:30.999 1+0 records in 00:06:30.999 1+0 records out 00:06:30.999 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00316156 s, 332 MB/s 00:06:30.999 02:10:32 -- spdk/autotest.sh@105 -- # sync 00:06:30.999 02:10:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:30.999 02:10:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:30.999 02:10:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:32.902 02:10:34 -- spdk/autotest.sh@111 -- # uname -s 00:06:32.902 02:10:34 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:06:32.902 02:10:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:32.902 02:10:34 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:33.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:33.468 Hugepages 00:06:33.468 node hugesize free / total 00:06:33.468 node0 1048576kB 0 / 0 00:06:33.468 node0 2048kB 0 / 0 00:06:33.468 00:06:33.468 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:33.468 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:33.727 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:33.727 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:33.727 02:10:35 -- spdk/autotest.sh@117 -- # uname -s 00:06:33.727 02:10:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:33.727 02:10:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:33.727 02:10:35 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:34.294 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:34.552 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.552 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:34.552 02:10:36 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:35.489 02:10:37 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:35.489 02:10:37 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:35.489 02:10:37 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:35.489 02:10:37 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:35.489 02:10:37 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:35.489 02:10:37 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:35.489 02:10:37 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:35.489 02:10:37 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:35.489 02:10:37 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:35.747 02:10:37 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:35.747 02:10:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:35.747 02:10:37 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:36.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:36.006 Waiting for block devices as requested 00:06:36.006 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.265 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:36.265 02:10:37 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:36.265 02:10:37 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:36.265 02:10:37 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:36.265 02:10:37 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:36.265 02:10:37 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:36.265 02:10:37 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:36.265 02:10:37 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:36.265 02:10:37 -- common/autotest_common.sh@1490 -- # printf 
'%s\n' nvme1 00:06:36.265 02:10:37 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:36.265 02:10:37 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:36.265 02:10:37 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:36.265 02:10:37 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:36.265 02:10:37 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:36.265 02:10:38 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:36.265 02:10:38 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:36.265 02:10:38 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:36.265 02:10:38 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:36.265 02:10:38 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:36.265 02:10:38 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:36.265 02:10:38 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:36.265 02:10:38 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:36.265 02:10:38 -- common/autotest_common.sh@1541 -- # continue 00:06:36.265 02:10:38 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:36.265 02:10:38 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:36.265 02:10:38 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:36.265 02:10:38 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:36.265 02:10:38 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:36.265 02:10:38 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:36.265 02:10:38 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:36.265 02:10:38 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:36.265 02:10:38 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:36.265 02:10:38 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:36.265 02:10:38 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:36.265 02:10:38 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:36.265 02:10:38 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:36.265 02:10:38 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:36.265 02:10:38 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:36.265 02:10:38 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:36.265 02:10:38 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:36.265 02:10:38 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:36.265 02:10:38 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:36.265 02:10:38 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:36.265 02:10:38 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:36.265 02:10:38 -- common/autotest_common.sh@1541 -- # continue 00:06:36.265 02:10:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:36.265 02:10:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:36.265 02:10:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.265 02:10:38 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:36.265 02:10:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:36.265 02:10:38 -- common/autotest_common.sh@10 -- # set +x 00:06:36.265 02:10:38 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:37.202 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:37.202 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:37.202 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:37.202 02:10:38 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:37.202 02:10:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.202 02:10:38 -- common/autotest_common.sh@10 -- # set +x 00:06:37.202 02:10:38 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:37.202 02:10:38 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:37.202 02:10:38 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:37.202 02:10:38 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:37.202 02:10:38 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:37.202 02:10:38 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:37.202 02:10:38 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:37.202 02:10:38 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:37.202 02:10:38 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:37.202 02:10:38 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:37.202 02:10:38 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:37.202 02:10:38 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:37.202 02:10:38 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:37.202 02:10:39 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:37.202 02:10:39 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:37.202 02:10:39 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:37.202 02:10:39 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:37.202 02:10:39 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:37.202 02:10:39 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:37.202 02:10:39 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:37.202 02:10:39 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:37.202 02:10:39 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:37.202 02:10:39 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:37.202 02:10:39 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:37.202 02:10:39 -- common/autotest_common.sh@1570 -- # return 0 00:06:37.202 02:10:39 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:37.202 02:10:39 -- common/autotest_common.sh@1578 -- # return 0 00:06:37.202 02:10:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:37.202 02:10:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:37.202 02:10:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:37.202 02:10:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:37.202 02:10:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:37.202 02:10:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.202 02:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:37.202 02:10:39 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:37.202 02:10:39 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:37.202 02:10:39 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:37.202 02:10:39 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:37.202 02:10:39 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.202 02:10:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.202 02:10:39 -- common/autotest_common.sh@10 -- # set +x 00:06:37.202 ************************************ 00:06:37.202 START TEST env 00:06:37.202 ************************************ 00:06:37.202 02:10:39 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:37.461 * Looking for test storage... 00:06:37.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:37.461 02:10:39 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.461 02:10:39 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.461 02:10:39 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.461 02:10:39 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.461 02:10:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.461 02:10:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.461 02:10:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.461 02:10:39 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.461 02:10:39 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.461 02:10:39 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.461 02:10:39 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.461 02:10:39 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.461 02:10:39 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.461 02:10:39 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.461 02:10:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.461 02:10:39 env -- scripts/common.sh@344 -- # case "$op" in 00:06:37.461 02:10:39 env -- scripts/common.sh@345 -- # : 1 00:06:37.461 02:10:39 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.461 02:10:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.461 02:10:39 env -- scripts/common.sh@365 -- # decimal 1 00:06:37.461 02:10:39 env -- scripts/common.sh@353 -- # local d=1 00:06:37.462 02:10:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.462 02:10:39 env -- scripts/common.sh@355 -- # echo 1 00:06:37.462 02:10:39 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.462 02:10:39 env -- scripts/common.sh@366 -- # decimal 2 00:06:37.462 02:10:39 env -- scripts/common.sh@353 -- # local d=2 00:06:37.462 02:10:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.462 02:10:39 env -- scripts/common.sh@355 -- # echo 2 00:06:37.462 02:10:39 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.462 02:10:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.462 02:10:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.462 02:10:39 env -- scripts/common.sh@368 -- # return 0 00:06:37.462 02:10:39 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.462 02:10:39 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.462 --rc genhtml_branch_coverage=1 00:06:37.462 --rc genhtml_function_coverage=1 00:06:37.462 --rc genhtml_legend=1 00:06:37.462 --rc geninfo_all_blocks=1 00:06:37.462 --rc geninfo_unexecuted_blocks=1 00:06:37.462 00:06:37.462 ' 00:06:37.462 02:10:39 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.462 --rc genhtml_branch_coverage=1 00:06:37.462 --rc genhtml_function_coverage=1 00:06:37.462 --rc genhtml_legend=1 00:06:37.462 --rc geninfo_all_blocks=1 00:06:37.462 --rc geninfo_unexecuted_blocks=1 00:06:37.462 00:06:37.462 ' 00:06:37.462 02:10:39 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.462 --rc genhtml_branch_coverage=1 00:06:37.462 --rc genhtml_function_coverage=1 00:06:37.462 --rc genhtml_legend=1 00:06:37.462 --rc geninfo_all_blocks=1 00:06:37.462 --rc geninfo_unexecuted_blocks=1 00:06:37.462 00:06:37.462 ' 00:06:37.462 02:10:39 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.462 --rc genhtml_branch_coverage=1 00:06:37.462 --rc genhtml_function_coverage=1 00:06:37.462 --rc genhtml_legend=1 00:06:37.462 --rc geninfo_all_blocks=1 00:06:37.462 --rc geninfo_unexecuted_blocks=1 00:06:37.462 00:06:37.462 ' 00:06:37.462 02:10:39 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:37.462 02:10:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.462 02:10:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.462 02:10:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:37.462 ************************************ 00:06:37.462 START TEST env_memory 00:06:37.462 ************************************ 00:06:37.462 02:10:39 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:37.462 00:06:37.462 00:06:37.462 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.462 http://cunit.sourceforge.net/ 00:06:37.462 00:06:37.462 00:06:37.462 Suite: memory 00:06:37.462 Test: alloc and free memory map ...[2024-11-08 02:10:39.313321] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:37.462 passed 00:06:37.721 Test: mem map translation ...[2024-11-08 02:10:39.344305] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:37.721 [2024-11-08 02:10:39.344351] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:37.721 [2024-11-08 02:10:39.344413] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:37.721 [2024-11-08 02:10:39.344424] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:37.721 passed 00:06:37.721 Test: mem map registration ...[2024-11-08 02:10:39.407971] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:37.721 [2024-11-08 02:10:39.408000] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:37.722 passed 00:06:37.722 Test: mem map adjacent registrations ...passed 00:06:37.722 00:06:37.722 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.722 suites 1 1 n/a 0 0 00:06:37.722 tests 4 4 4 0 0 00:06:37.722 asserts 152 152 152 0 n/a 00:06:37.722 00:06:37.722 Elapsed time = 0.213 seconds 00:06:37.722 00:06:37.722 real 0m0.230s 00:06:37.722 user 0m0.212s 00:06:37.722 sys 0m0.014s 00:06:37.722 02:10:39 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.722 02:10:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:37.722 ************************************ 00:06:37.722 END TEST env_memory 00:06:37.722 ************************************ 00:06:37.722 02:10:39 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:37.722 02:10:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.722 02:10:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.722 02:10:39 env -- common/autotest_common.sh@10 -- # set +x 00:06:37.722 ************************************ 00:06:37.722 START TEST env_vtophys 00:06:37.722 ************************************ 00:06:37.722 02:10:39 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:37.722 EAL: lib.eal log level changed from notice to debug 00:06:37.722 EAL: Detected lcore 0 as core 0 on socket 0 00:06:37.722 EAL: Detected lcore 1 as core 0 on socket 0 00:06:37.722 EAL: Detected lcore 2 as core 0 on socket 0 00:06:37.722 EAL: Detected lcore 3 as core 0 on socket 0 00:06:37.722 EAL: Detected lcore 4 as core 0 on socket 0 00:06:37.722 EAL: Detected lcore 5 as core 0 on socket 0 00:06:37.722 EAL: Detected lcore 6 as core 0 on socket 0 00:06:37.722 EAL: Detected lcore 7 as core 0 on socket 0 00:06:37.722 EAL: Detected lcore 8 as core 0 on socket 0 00:06:37.722 EAL: Detected lcore 9 as core 0 on socket 0 00:06:37.722 EAL: Maximum logical cores by configuration: 128 00:06:37.722 EAL: Detected CPU lcores: 10 00:06:37.722 EAL: Detected NUMA nodes: 1 00:06:37.722 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:37.722 EAL: Detected shared linkage of DPDK 00:06:37.722 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:37.722 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:37.722 EAL: Registered [vdev] bus. 00:06:37.722 EAL: bus.vdev log level changed from disabled to notice 00:06:37.722 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:37.722 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:37.722 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:37.722 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:37.722 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:37.722 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:37.722 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:37.722 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:37.722 EAL: No shared files mode enabled, IPC will be disabled 00:06:37.722 EAL: No shared files mode enabled, IPC is disabled 00:06:37.722 EAL: Selected IOVA mode 'PA' 00:06:37.722 EAL: Probing VFIO support... 00:06:37.722 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:37.722 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:37.722 EAL: Ask a virtual area of 0x2e000 bytes 00:06:37.722 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:37.722 EAL: Setting up physically contiguous memory... 00:06:37.722 EAL: Setting maximum number of open files to 524288 00:06:37.722 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:37.722 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:37.722 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.722 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:37.722 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:37.722 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.722 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:37.722 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:37.722 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.722 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:37.722 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:37.722 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.722 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:37.722 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:37.722 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.722 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:37.722 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:37.722 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.722 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:37.722 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:37.722 EAL: Ask a virtual area of 0x61000 bytes 00:06:37.722 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:37.722 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:37.722 EAL: Ask a virtual area of 0x400000000 bytes 00:06:37.722 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:37.722 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:37.722 EAL: Hugepages will be freed exactly as allocated. 00:06:37.722 EAL: No shared files mode enabled, IPC is disabled 00:06:37.722 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: TSC frequency is ~2200000 KHz 00:06:37.980 EAL: Main lcore 0 is ready (tid=7fa84b09da00;cpuset=[0]) 00:06:37.980 EAL: Trying to obtain current memory policy. 00:06:37.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.980 EAL: Restoring previous memory policy: 0 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was expanded by 2MB 00:06:37.980 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:37.980 EAL: Mem event callback 'spdk:(nil)' registered 00:06:37.980 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:37.980 00:06:37.980 00:06:37.980 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.980 http://cunit.sourceforge.net/ 00:06:37.980 00:06:37.980 00:06:37.980 Suite: components_suite 00:06:37.980 Test: vtophys_malloc_test ...passed 00:06:37.980 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:37.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.980 EAL: Restoring previous memory policy: 4 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was expanded by 4MB 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was shrunk by 4MB 00:06:37.980 EAL: Trying to obtain current memory policy. 00:06:37.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.980 EAL: Restoring previous memory policy: 4 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was expanded by 6MB 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was shrunk by 6MB 00:06:37.980 EAL: Trying to obtain current memory policy. 00:06:37.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.980 EAL: Restoring previous memory policy: 4 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was expanded by 10MB 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was shrunk by 10MB 00:06:37.980 EAL: Trying to obtain current memory policy. 
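Editor's note: the translation and registration errors logged by env_memory earlier, and the memseg lists the EAL has just reserved, both revolve around SPDK's 2 MiB-granularity memory map. The sketch below shows roughly how that API is driven; it is not the test's code. It assumes a working SPDK build, the application name and translation value are illustrative, and the "Initial mem_map notify failed" error above is typically produced when a map's notify callback rejects an existing registration.

```c
#include "spdk/env.h"
#include <inttypes.h>
#include <stdio.h>

#define SZ_2MB (2ULL * 1024 * 1024)

/* Notify callback: invoked as 2 MiB-aligned regions are added to or removed
 * from the global memory map. Returning non-zero rejects the registration. */
static int
notify_cb(void *cb_ctx, struct spdk_mem_map *map,
          enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
    return 0;
}

static const struct spdk_mem_map_ops ops = { .notify_cb = notify_cb };

int
main(void)
{
    struct spdk_env_opts opts;
    uint64_t size = SZ_2MB;

    spdk_env_opts_init(&opts);
    opts.name = "mem_map_sketch";            /* hypothetical application name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);

    /* 2 MiB-aligned, already-registered DMA memory. Both vaddr and len passed
     * to the map APIs must be 2 MiB multiples; unaligned values such as
     * len=1234 are rejected, which is what the *ERROR* lines above exercise. */
    void *buf = spdk_dma_malloc(SZ_2MB, SZ_2MB, NULL);
    if (buf == NULL || map == NULL) {
        return 1;
    }

    spdk_mem_map_set_translation(map, (uint64_t)buf, SZ_2MB, 0x1234);
    printf("translation: 0x%" PRIx64 "\n",
           spdk_mem_map_translate(map, (uint64_t)buf, &size));

    spdk_mem_map_free(&map);
    spdk_dma_free(buf);
    return 0;
}
```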
00:06:37.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.980 EAL: Restoring previous memory policy: 4 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was expanded by 18MB 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was shrunk by 18MB 00:06:37.980 EAL: Trying to obtain current memory policy. 00:06:37.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.980 EAL: Restoring previous memory policy: 4 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was expanded by 34MB 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was shrunk by 34MB 00:06:37.980 EAL: Trying to obtain current memory policy. 00:06:37.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.980 EAL: Restoring previous memory policy: 4 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was expanded by 66MB 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was shrunk by 66MB 00:06:37.980 EAL: Trying to obtain current memory policy. 00:06:37.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.980 EAL: Restoring previous memory policy: 4 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was expanded by 130MB 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was shrunk by 130MB 00:06:37.980 EAL: Trying to obtain current memory policy. 00:06:37.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:37.980 EAL: Restoring previous memory policy: 4 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.980 EAL: request: mp_malloc_sync 00:06:37.980 EAL: No shared files mode enabled, IPC is disabled 00:06:37.980 EAL: Heap on socket 0 was expanded by 258MB 00:06:37.980 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.239 EAL: request: mp_malloc_sync 00:06:38.239 EAL: No shared files mode enabled, IPC is disabled 00:06:38.239 EAL: Heap on socket 0 was shrunk by 258MB 00:06:38.239 EAL: Trying to obtain current memory policy. 
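Editor's note: each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair in this stretch corresponds to the vtophys suite allocating a progressively larger DMA-capable buffer and freeing it again, with the 'spdk:(nil)' mem event callback telling SPDK that DPDK grew or shrank its hugepage heap. One way to trigger the same behavior, sketched under the assumption of a hypothetical application name and a smaller size sweep than the real test:

```c
#include "spdk/env.h"
#include <inttypes.h>
#include <stdio.h>

/* Allocate a DMA buffer, report its physical address, and free it again. */
static void
alloc_and_translate(size_t len)
{
    uint64_t size = len;
    /* Large DMA allocations force DPDK to grow the hugepage heap, which is
     * what produces the "Heap on socket 0 was expanded by ..." lines. */
    void *buf = spdk_dma_zmalloc(len, 0x200000, NULL);

    if (buf == NULL) {
        fprintf(stderr, "allocation of %zu bytes failed\n", len);
        return;
    }
    printf("%zu bytes at %p -> physical 0x%" PRIx64 "\n",
           len, buf, spdk_vtophys(buf, &size));
    spdk_dma_free(buf);    /* freeing lets the heap shrink again */
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "vtophys_sketch";    /* hypothetical application name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    for (size_t len = 4ULL << 20; len <= 64ULL << 20; len *= 2) {
        alloc_and_translate(len);
    }
    return 0;
}
```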
00:06:38.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.239 EAL: Restoring previous memory policy: 4 00:06:38.239 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.239 EAL: request: mp_malloc_sync 00:06:38.239 EAL: No shared files mode enabled, IPC is disabled 00:06:38.239 EAL: Heap on socket 0 was expanded by 514MB 00:06:38.239 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.239 EAL: request: mp_malloc_sync 00:06:38.239 EAL: No shared files mode enabled, IPC is disabled 00:06:38.239 EAL: Heap on socket 0 was shrunk by 514MB 00:06:38.239 EAL: Trying to obtain current memory policy. 00:06:38.239 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:38.498 EAL: Restoring previous memory policy: 4 00:06:38.498 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.498 EAL: request: mp_malloc_sync 00:06:38.498 EAL: No shared files mode enabled, IPC is disabled 00:06:38.498 EAL: Heap on socket 0 was expanded by 1026MB 00:06:38.498 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.757 EAL: request: mp_malloc_sync 00:06:38.757 passed 00:06:38.757 00:06:38.757 EAL: No shared files mode enabled, IPC is disabled 00:06:38.757 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:38.757 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.757 suites 1 1 n/a 0 0 00:06:38.757 tests 2 2 2 0 0 00:06:38.757 asserts 5792 5792 5792 0 n/a 00:06:38.757 00:06:38.757 Elapsed time = 0.674 seconds 00:06:38.757 EAL: Calling mem event callback 'spdk:(nil)' 00:06:38.757 EAL: request: mp_malloc_sync 00:06:38.757 EAL: No shared files mode enabled, IPC is disabled 00:06:38.757 EAL: Heap on socket 0 was shrunk by 2MB 00:06:38.757 EAL: No shared files mode enabled, IPC is disabled 00:06:38.757 EAL: No shared files mode enabled, IPC is disabled 00:06:38.757 EAL: No shared files mode enabled, IPC is disabled 00:06:38.757 00:06:38.757 real 0m0.869s 00:06:38.757 user 0m0.430s 00:06:38.757 sys 0m0.308s 00:06:38.757 02:10:40 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.757 02:10:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:38.757 ************************************ 00:06:38.757 END TEST env_vtophys 00:06:38.757 ************************************ 00:06:38.757 02:10:40 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:38.757 02:10:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.757 02:10:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.757 02:10:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:38.757 ************************************ 00:06:38.757 START TEST env_pci 00:06:38.757 ************************************ 00:06:38.757 02:10:40 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:38.757 00:06:38.757 00:06:38.757 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.757 http://cunit.sourceforge.net/ 00:06:38.757 00:06:38.757 00:06:38.757 Suite: pci 00:06:38.757 Test: pci_hook ...[2024-11-08 02:10:40.480941] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69794 has claimed it 00:06:38.757 passed 00:06:38.757 00:06:38.757 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.757 suites 1 1 n/a 0 0 00:06:38.757 tests 1 1 1 0 0 00:06:38.757 asserts 25 25 25 0 n/a 00:06:38.757 00:06:38.757 Elapsed time = 0.002 seconds 00:06:38.757 EAL: Cannot find 
device (10000:00:01.0) 00:06:38.757 EAL: Failed to attach device on primary process 00:06:38.757 00:06:38.757 real 0m0.018s 00:06:38.757 user 0m0.010s 00:06:38.757 sys 0m0.007s 00:06:38.757 02:10:40 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.757 02:10:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:38.757 ************************************ 00:06:38.757 END TEST env_pci 00:06:38.757 ************************************ 00:06:38.757 02:10:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:38.757 02:10:40 env -- env/env.sh@15 -- # uname 00:06:38.757 02:10:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:38.757 02:10:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:38.757 02:10:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:38.757 02:10:40 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:38.757 02:10:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.757 02:10:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:38.757 ************************************ 00:06:38.757 START TEST env_dpdk_post_init 00:06:38.757 ************************************ 00:06:38.757 02:10:40 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:38.757 EAL: Detected CPU lcores: 10 00:06:38.757 EAL: Detected NUMA nodes: 1 00:06:38.758 EAL: Detected shared linkage of DPDK 00:06:38.758 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:38.758 EAL: Selected IOVA mode 'PA' 00:06:39.017 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:39.017 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:39.017 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:39.017 Starting DPDK initialization... 00:06:39.017 Starting SPDK post initialization... 00:06:39.017 SPDK NVMe probe 00:06:39.017 Attaching to 0000:00:10.0 00:06:39.017 Attaching to 0000:00:11.0 00:06:39.017 Attached to 0000:00:10.0 00:06:39.017 Attached to 0000:00:11.0 00:06:39.017 Cleaning up... 
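Editor's note: the env_dpdk_post_init run above enumerates the two emulated controllers at 0000:00:10.0 and 0000:00:11.0 through the spdk_nvme PCI driver and then detaches them ("Cleaning up..."). The probe/attach handshake it exercises looks roughly like the sketch below; the callback bodies and application name are illustrative and are not the test's actual code.

```c
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

/* Return true to claim a controller reported by the enumeration. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true;
}

/* Called once the controller has been initialized and is usable. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "post_init_sketch";    /* hypothetical application name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    /* A NULL transport ID means: enumerate local PCIe NVMe controllers. */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
        fprintf(stderr, "spdk_nvme_probe failed\n");
        return 1;
    }
    return 0;
}
```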
00:06:39.017 00:06:39.017 real 0m0.175s 00:06:39.017 user 0m0.048s 00:06:39.017 sys 0m0.027s 00:06:39.017 02:10:40 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.017 02:10:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:39.017 ************************************ 00:06:39.017 END TEST env_dpdk_post_init 00:06:39.017 ************************************ 00:06:39.017 02:10:40 env -- env/env.sh@26 -- # uname 00:06:39.017 02:10:40 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:39.017 02:10:40 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:39.017 02:10:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.017 02:10:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.017 02:10:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:39.017 ************************************ 00:06:39.017 START TEST env_mem_callbacks 00:06:39.017 ************************************ 00:06:39.017 02:10:40 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:39.017 EAL: Detected CPU lcores: 10 00:06:39.017 EAL: Detected NUMA nodes: 1 00:06:39.017 EAL: Detected shared linkage of DPDK 00:06:39.017 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:39.017 EAL: Selected IOVA mode 'PA' 00:06:39.276 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:39.276 00:06:39.276 00:06:39.276 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.276 http://cunit.sourceforge.net/ 00:06:39.276 00:06:39.276 00:06:39.276 Suite: memory 00:06:39.276 Test: test ... 00:06:39.276 register 0x200000200000 2097152 00:06:39.276 malloc 3145728 00:06:39.276 register 0x200000400000 4194304 00:06:39.276 buf 0x200000500000 len 3145728 PASSED 00:06:39.276 malloc 64 00:06:39.276 buf 0x2000004fff40 len 64 PASSED 00:06:39.276 malloc 4194304 00:06:39.276 register 0x200000800000 6291456 00:06:39.276 buf 0x200000a00000 len 4194304 PASSED 00:06:39.276 free 0x200000500000 3145728 00:06:39.276 free 0x2000004fff40 64 00:06:39.276 unregister 0x200000400000 4194304 PASSED 00:06:39.276 free 0x200000a00000 4194304 00:06:39.276 unregister 0x200000800000 6291456 PASSED 00:06:39.276 malloc 8388608 00:06:39.276 register 0x200000400000 10485760 00:06:39.276 buf 0x200000600000 len 8388608 PASSED 00:06:39.276 free 0x200000600000 8388608 00:06:39.276 unregister 0x200000400000 10485760 PASSED 00:06:39.276 passed 00:06:39.276 00:06:39.276 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.276 suites 1 1 n/a 0 0 00:06:39.276 tests 1 1 1 0 0 00:06:39.276 asserts 15 15 15 0 n/a 00:06:39.276 00:06:39.276 Elapsed time = 0.009 seconds 00:06:39.276 00:06:39.276 real 0m0.143s 00:06:39.276 user 0m0.018s 00:06:39.276 sys 0m0.024s 00:06:39.276 02:10:40 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.276 02:10:40 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:39.276 ************************************ 00:06:39.276 END TEST env_mem_callbacks 00:06:39.276 ************************************ 00:06:39.276 00:06:39.276 real 0m1.896s 00:06:39.276 user 0m0.928s 00:06:39.276 sys 0m0.616s 00:06:39.276 02:10:40 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.276 02:10:40 env -- common/autotest_common.sh@10 -- # set +x 00:06:39.276 ************************************ 00:06:39.276 END TEST env 00:06:39.276 
************************************ 00:06:39.276 02:10:40 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:39.276 02:10:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.276 02:10:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.276 02:10:41 -- common/autotest_common.sh@10 -- # set +x 00:06:39.276 ************************************ 00:06:39.276 START TEST rpc 00:06:39.276 ************************************ 00:06:39.276 02:10:41 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:39.276 * Looking for test storage... 00:06:39.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:39.276 02:10:41 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.276 02:10:41 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.276 02:10:41 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.536 02:10:41 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.536 02:10:41 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.536 02:10:41 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.536 02:10:41 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.536 02:10:41 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.536 02:10:41 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.536 02:10:41 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.536 02:10:41 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.536 02:10:41 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.536 02:10:41 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.536 02:10:41 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.536 02:10:41 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:39.536 02:10:41 rpc -- scripts/common.sh@345 -- # : 1 00:06:39.536 02:10:41 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.536 02:10:41 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.536 02:10:41 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:39.536 02:10:41 rpc -- scripts/common.sh@353 -- # local d=1 00:06:39.536 02:10:41 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.536 02:10:41 rpc -- scripts/common.sh@355 -- # echo 1 00:06:39.536 02:10:41 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.536 02:10:41 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:39.536 02:10:41 rpc -- scripts/common.sh@353 -- # local d=2 00:06:39.536 02:10:41 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.536 02:10:41 rpc -- scripts/common.sh@355 -- # echo 2 00:06:39.536 02:10:41 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.536 02:10:41 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:39.536 02:10:41 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.536 02:10:41 rpc -- scripts/common.sh@368 -- # return 0 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.536 --rc genhtml_branch_coverage=1 00:06:39.536 --rc genhtml_function_coverage=1 00:06:39.536 --rc genhtml_legend=1 00:06:39.536 --rc geninfo_all_blocks=1 00:06:39.536 --rc geninfo_unexecuted_blocks=1 00:06:39.536 00:06:39.536 ' 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.536 --rc genhtml_branch_coverage=1 00:06:39.536 --rc genhtml_function_coverage=1 00:06:39.536 --rc genhtml_legend=1 00:06:39.536 --rc geninfo_all_blocks=1 00:06:39.536 --rc geninfo_unexecuted_blocks=1 00:06:39.536 00:06:39.536 ' 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.536 --rc genhtml_branch_coverage=1 00:06:39.536 --rc genhtml_function_coverage=1 00:06:39.536 --rc genhtml_legend=1 00:06:39.536 --rc geninfo_all_blocks=1 00:06:39.536 --rc geninfo_unexecuted_blocks=1 00:06:39.536 00:06:39.536 ' 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.536 --rc genhtml_branch_coverage=1 00:06:39.536 --rc genhtml_function_coverage=1 00:06:39.536 --rc genhtml_legend=1 00:06:39.536 --rc geninfo_all_blocks=1 00:06:39.536 --rc geninfo_unexecuted_blocks=1 00:06:39.536 00:06:39.536 ' 00:06:39.536 02:10:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69912 00:06:39.536 02:10:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.536 02:10:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69912 00:06:39.536 02:10:41 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@831 -- # '[' -z 69912 ']' 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.536 02:10:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.536 [2024-11-08 02:10:41.273470] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:39.536 [2024-11-08 02:10:41.273581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69912 ] 00:06:39.536 [2024-11-08 02:10:41.413099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.795 [2024-11-08 02:10:41.455880] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:06:39.795 [2024-11-08 02:10:41.455934] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69912' to capture a snapshot of events at runtime. 00:06:39.795 [2024-11-08 02:10:41.455948] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.795 [2024-11-08 02:10:41.455958] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.795 [2024-11-08 02:10:41.455966] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69912 for offline analysis/debug. 00:06:39.795 [2024-11-08 02:10:41.455999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.795 [2024-11-08 02:10:41.496019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.795 02:10:41 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.795 02:10:41 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:39.795 02:10:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:39.795 02:10:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:39.795 02:10:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:39.795 02:10:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:39.795 02:10:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.795 02:10:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.795 02:10:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.795 ************************************ 00:06:39.795 START TEST rpc_integrity 00:06:39.795 ************************************ 00:06:39.795 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:39.795 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:39.795 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.795 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:39.795 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.795 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:39.795 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:40.055 { 
00:06:40.055 "name": "Malloc0", 00:06:40.055 "aliases": [ 00:06:40.055 "21b2e137-18b7-4a4e-a56c-516cb8c90dce" 00:06:40.055 ], 00:06:40.055 "product_name": "Malloc disk", 00:06:40.055 "block_size": 512, 00:06:40.055 "num_blocks": 16384, 00:06:40.055 "uuid": "21b2e137-18b7-4a4e-a56c-516cb8c90dce", 00:06:40.055 "assigned_rate_limits": { 00:06:40.055 "rw_ios_per_sec": 0, 00:06:40.055 "rw_mbytes_per_sec": 0, 00:06:40.055 "r_mbytes_per_sec": 0, 00:06:40.055 "w_mbytes_per_sec": 0 00:06:40.055 }, 00:06:40.055 "claimed": false, 00:06:40.055 "zoned": false, 00:06:40.055 "supported_io_types": { 00:06:40.055 "read": true, 00:06:40.055 "write": true, 00:06:40.055 "unmap": true, 00:06:40.055 "flush": true, 00:06:40.055 "reset": true, 00:06:40.055 "nvme_admin": false, 00:06:40.055 "nvme_io": false, 00:06:40.055 "nvme_io_md": false, 00:06:40.055 "write_zeroes": true, 00:06:40.055 "zcopy": true, 00:06:40.055 "get_zone_info": false, 00:06:40.055 "zone_management": false, 00:06:40.055 "zone_append": false, 00:06:40.055 "compare": false, 00:06:40.055 "compare_and_write": false, 00:06:40.055 "abort": true, 00:06:40.055 "seek_hole": false, 00:06:40.055 "seek_data": false, 00:06:40.055 "copy": true, 00:06:40.055 "nvme_iov_md": false 00:06:40.055 }, 00:06:40.055 "memory_domains": [ 00:06:40.055 { 00:06:40.055 "dma_device_id": "system", 00:06:40.055 "dma_device_type": 1 00:06:40.055 }, 00:06:40.055 { 00:06:40.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.055 "dma_device_type": 2 00:06:40.055 } 00:06:40.055 ], 00:06:40.055 "driver_specific": {} 00:06:40.055 } 00:06:40.055 ]' 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 [2024-11-08 02:10:41.791666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:40.055 [2024-11-08 02:10:41.791718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.055 [2024-11-08 02:10:41.791736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c5030 00:06:40.055 [2024-11-08 02:10:41.791745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:40.055 [2024-11-08 02:10:41.793170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.055 [2024-11-08 02:10:41.793211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:40.055 Passthru0 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:40.055 { 00:06:40.055 "name": "Malloc0", 00:06:40.055 "aliases": [ 00:06:40.055 "21b2e137-18b7-4a4e-a56c-516cb8c90dce" 00:06:40.055 ], 00:06:40.055 "product_name": "Malloc disk", 00:06:40.055 "block_size": 512, 00:06:40.055 "num_blocks": 16384, 00:06:40.055 
"uuid": "21b2e137-18b7-4a4e-a56c-516cb8c90dce", 00:06:40.055 "assigned_rate_limits": { 00:06:40.055 "rw_ios_per_sec": 0, 00:06:40.055 "rw_mbytes_per_sec": 0, 00:06:40.055 "r_mbytes_per_sec": 0, 00:06:40.055 "w_mbytes_per_sec": 0 00:06:40.055 }, 00:06:40.055 "claimed": true, 00:06:40.055 "claim_type": "exclusive_write", 00:06:40.055 "zoned": false, 00:06:40.055 "supported_io_types": { 00:06:40.055 "read": true, 00:06:40.055 "write": true, 00:06:40.055 "unmap": true, 00:06:40.055 "flush": true, 00:06:40.055 "reset": true, 00:06:40.055 "nvme_admin": false, 00:06:40.055 "nvme_io": false, 00:06:40.055 "nvme_io_md": false, 00:06:40.055 "write_zeroes": true, 00:06:40.055 "zcopy": true, 00:06:40.055 "get_zone_info": false, 00:06:40.055 "zone_management": false, 00:06:40.055 "zone_append": false, 00:06:40.055 "compare": false, 00:06:40.055 "compare_and_write": false, 00:06:40.055 "abort": true, 00:06:40.055 "seek_hole": false, 00:06:40.055 "seek_data": false, 00:06:40.055 "copy": true, 00:06:40.055 "nvme_iov_md": false 00:06:40.055 }, 00:06:40.055 "memory_domains": [ 00:06:40.055 { 00:06:40.055 "dma_device_id": "system", 00:06:40.055 "dma_device_type": 1 00:06:40.055 }, 00:06:40.055 { 00:06:40.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.055 "dma_device_type": 2 00:06:40.055 } 00:06:40.055 ], 00:06:40.055 "driver_specific": {} 00:06:40.055 }, 00:06:40.055 { 00:06:40.055 "name": "Passthru0", 00:06:40.055 "aliases": [ 00:06:40.055 "2bae34ce-5cf8-58d0-a11b-a3c0d9883550" 00:06:40.055 ], 00:06:40.055 "product_name": "passthru", 00:06:40.055 "block_size": 512, 00:06:40.055 "num_blocks": 16384, 00:06:40.055 "uuid": "2bae34ce-5cf8-58d0-a11b-a3c0d9883550", 00:06:40.055 "assigned_rate_limits": { 00:06:40.055 "rw_ios_per_sec": 0, 00:06:40.055 "rw_mbytes_per_sec": 0, 00:06:40.055 "r_mbytes_per_sec": 0, 00:06:40.055 "w_mbytes_per_sec": 0 00:06:40.055 }, 00:06:40.055 "claimed": false, 00:06:40.055 "zoned": false, 00:06:40.055 "supported_io_types": { 00:06:40.055 "read": true, 00:06:40.055 "write": true, 00:06:40.055 "unmap": true, 00:06:40.055 "flush": true, 00:06:40.055 "reset": true, 00:06:40.055 "nvme_admin": false, 00:06:40.055 "nvme_io": false, 00:06:40.055 "nvme_io_md": false, 00:06:40.055 "write_zeroes": true, 00:06:40.055 "zcopy": true, 00:06:40.055 "get_zone_info": false, 00:06:40.055 "zone_management": false, 00:06:40.055 "zone_append": false, 00:06:40.055 "compare": false, 00:06:40.055 "compare_and_write": false, 00:06:40.055 "abort": true, 00:06:40.055 "seek_hole": false, 00:06:40.055 "seek_data": false, 00:06:40.055 "copy": true, 00:06:40.055 "nvme_iov_md": false 00:06:40.055 }, 00:06:40.055 "memory_domains": [ 00:06:40.055 { 00:06:40.055 "dma_device_id": "system", 00:06:40.055 "dma_device_type": 1 00:06:40.055 }, 00:06:40.055 { 00:06:40.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.055 "dma_device_type": 2 00:06:40.055 } 00:06:40.055 ], 00:06:40.055 "driver_specific": { 00:06:40.055 "passthru": { 00:06:40.055 "name": "Passthru0", 00:06:40.055 "base_bdev_name": "Malloc0" 00:06:40.055 } 00:06:40.055 } 00:06:40.055 } 00:06:40.055 ]' 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 02:10:41 
rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.055 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:40.055 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:40.314 ************************************ 00:06:40.314 END TEST rpc_integrity 00:06:40.314 ************************************ 00:06:40.314 02:10:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:40.314 00:06:40.314 real 0m0.327s 00:06:40.314 user 0m0.219s 00:06:40.314 sys 0m0.042s 00:06:40.314 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.314 02:10:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.314 02:10:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:40.314 02:10:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.314 02:10:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.314 02:10:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.314 ************************************ 00:06:40.314 START TEST rpc_plugins 00:06:40.314 ************************************ 00:06:40.314 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:40.314 02:10:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:40.314 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.314 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.314 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.314 02:10:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:40.314 02:10:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:40.314 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.314 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.314 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.314 02:10:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:40.314 { 00:06:40.314 "name": "Malloc1", 00:06:40.314 "aliases": [ 00:06:40.314 "1c38ce43-b43c-46ff-86cb-f394f1f01051" 00:06:40.314 ], 00:06:40.314 "product_name": "Malloc disk", 00:06:40.314 "block_size": 4096, 00:06:40.314 "num_blocks": 256, 00:06:40.314 "uuid": "1c38ce43-b43c-46ff-86cb-f394f1f01051", 00:06:40.314 "assigned_rate_limits": { 00:06:40.314 "rw_ios_per_sec": 0, 00:06:40.314 "rw_mbytes_per_sec": 0, 00:06:40.314 "r_mbytes_per_sec": 0, 00:06:40.314 "w_mbytes_per_sec": 0 00:06:40.314 }, 00:06:40.314 "claimed": false, 00:06:40.314 "zoned": false, 00:06:40.314 "supported_io_types": { 00:06:40.314 "read": true, 00:06:40.314 "write": true, 00:06:40.314 "unmap": true, 00:06:40.314 "flush": true, 00:06:40.314 "reset": true, 
00:06:40.314 "nvme_admin": false, 00:06:40.314 "nvme_io": false, 00:06:40.314 "nvme_io_md": false, 00:06:40.314 "write_zeroes": true, 00:06:40.314 "zcopy": true, 00:06:40.314 "get_zone_info": false, 00:06:40.314 "zone_management": false, 00:06:40.314 "zone_append": false, 00:06:40.314 "compare": false, 00:06:40.314 "compare_and_write": false, 00:06:40.314 "abort": true, 00:06:40.314 "seek_hole": false, 00:06:40.314 "seek_data": false, 00:06:40.314 "copy": true, 00:06:40.314 "nvme_iov_md": false 00:06:40.314 }, 00:06:40.314 "memory_domains": [ 00:06:40.314 { 00:06:40.314 "dma_device_id": "system", 00:06:40.314 "dma_device_type": 1 00:06:40.314 }, 00:06:40.314 { 00:06:40.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.314 "dma_device_type": 2 00:06:40.314 } 00:06:40.314 ], 00:06:40.314 "driver_specific": {} 00:06:40.314 } 00:06:40.314 ]' 00:06:40.314 02:10:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:40.314 02:10:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:40.314 02:10:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:40.314 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.314 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.314 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.314 02:10:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:40.314 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.315 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.315 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.315 02:10:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:40.315 02:10:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:40.315 ************************************ 00:06:40.315 END TEST rpc_plugins 00:06:40.315 ************************************ 00:06:40.315 02:10:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:40.315 00:06:40.315 real 0m0.166s 00:06:40.315 user 0m0.106s 00:06:40.315 sys 0m0.019s 00:06:40.315 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.315 02:10:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.573 02:10:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:40.573 02:10:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.573 02:10:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.573 02:10:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.573 ************************************ 00:06:40.573 START TEST rpc_trace_cmd_test 00:06:40.573 ************************************ 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:40.573 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69912", 00:06:40.573 "tpoint_group_mask": "0x8", 00:06:40.573 
"iscsi_conn": { 00:06:40.573 "mask": "0x2", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "scsi": { 00:06:40.573 "mask": "0x4", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "bdev": { 00:06:40.573 "mask": "0x8", 00:06:40.573 "tpoint_mask": "0xffffffffffffffff" 00:06:40.573 }, 00:06:40.573 "nvmf_rdma": { 00:06:40.573 "mask": "0x10", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "nvmf_tcp": { 00:06:40.573 "mask": "0x20", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "ftl": { 00:06:40.573 "mask": "0x40", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "blobfs": { 00:06:40.573 "mask": "0x80", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "dsa": { 00:06:40.573 "mask": "0x200", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "thread": { 00:06:40.573 "mask": "0x400", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "nvme_pcie": { 00:06:40.573 "mask": "0x800", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "iaa": { 00:06:40.573 "mask": "0x1000", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "nvme_tcp": { 00:06:40.573 "mask": "0x2000", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "bdev_nvme": { 00:06:40.573 "mask": "0x4000", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "sock": { 00:06:40.573 "mask": "0x8000", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "blob": { 00:06:40.573 "mask": "0x10000", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 }, 00:06:40.573 "bdev_raid": { 00:06:40.573 "mask": "0x20000", 00:06:40.573 "tpoint_mask": "0x0" 00:06:40.573 } 00:06:40.573 }' 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:40.573 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:40.832 ************************************ 00:06:40.832 END TEST rpc_trace_cmd_test 00:06:40.832 ************************************ 00:06:40.832 02:10:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:40.832 00:06:40.832 real 0m0.256s 00:06:40.832 user 0m0.224s 00:06:40.832 sys 0m0.019s 00:06:40.832 02:10:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.832 02:10:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.832 02:10:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:40.832 02:10:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:40.832 02:10:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:40.832 02:10:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.832 02:10:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.832 02:10:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.832 ************************************ 00:06:40.832 START TEST rpc_daemon_integrity 
00:06:40.832 ************************************ 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:40.832 { 00:06:40.832 "name": "Malloc2", 00:06:40.832 "aliases": [ 00:06:40.832 "4644c596-12ea-4268-a732-489e6014f0f9" 00:06:40.832 ], 00:06:40.832 "product_name": "Malloc disk", 00:06:40.832 "block_size": 512, 00:06:40.832 "num_blocks": 16384, 00:06:40.832 "uuid": "4644c596-12ea-4268-a732-489e6014f0f9", 00:06:40.832 "assigned_rate_limits": { 00:06:40.832 "rw_ios_per_sec": 0, 00:06:40.832 "rw_mbytes_per_sec": 0, 00:06:40.832 "r_mbytes_per_sec": 0, 00:06:40.832 "w_mbytes_per_sec": 0 00:06:40.832 }, 00:06:40.832 "claimed": false, 00:06:40.832 "zoned": false, 00:06:40.832 "supported_io_types": { 00:06:40.832 "read": true, 00:06:40.832 "write": true, 00:06:40.832 "unmap": true, 00:06:40.832 "flush": true, 00:06:40.832 "reset": true, 00:06:40.832 "nvme_admin": false, 00:06:40.832 "nvme_io": false, 00:06:40.832 "nvme_io_md": false, 00:06:40.832 "write_zeroes": true, 00:06:40.832 "zcopy": true, 00:06:40.832 "get_zone_info": false, 00:06:40.832 "zone_management": false, 00:06:40.832 "zone_append": false, 00:06:40.832 "compare": false, 00:06:40.832 "compare_and_write": false, 00:06:40.832 "abort": true, 00:06:40.832 "seek_hole": false, 00:06:40.832 "seek_data": false, 00:06:40.832 "copy": true, 00:06:40.832 "nvme_iov_md": false 00:06:40.832 }, 00:06:40.832 "memory_domains": [ 00:06:40.832 { 00:06:40.832 "dma_device_id": "system", 00:06:40.832 "dma_device_type": 1 00:06:40.832 }, 00:06:40.832 { 00:06:40.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.832 "dma_device_type": 2 00:06:40.832 } 00:06:40.832 ], 00:06:40.832 "driver_specific": {} 00:06:40.832 } 00:06:40.832 ]' 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.832 [2024-11-08 02:10:42.680008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:40.832 [2024-11-08 02:10:42.680061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.832 [2024-11-08 02:10:42.680077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14c7ce0 00:06:40.832 [2024-11-08 02:10:42.680084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:40.832 [2024-11-08 02:10:42.681402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.832 [2024-11-08 02:10:42.681441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:40.832 Passthru0 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.832 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:41.092 { 00:06:41.092 "name": "Malloc2", 00:06:41.092 "aliases": [ 00:06:41.092 "4644c596-12ea-4268-a732-489e6014f0f9" 00:06:41.092 ], 00:06:41.092 "product_name": "Malloc disk", 00:06:41.092 "block_size": 512, 00:06:41.092 "num_blocks": 16384, 00:06:41.092 "uuid": "4644c596-12ea-4268-a732-489e6014f0f9", 00:06:41.092 "assigned_rate_limits": { 00:06:41.092 "rw_ios_per_sec": 0, 00:06:41.092 "rw_mbytes_per_sec": 0, 00:06:41.092 "r_mbytes_per_sec": 0, 00:06:41.092 "w_mbytes_per_sec": 0 00:06:41.092 }, 00:06:41.092 "claimed": true, 00:06:41.092 "claim_type": "exclusive_write", 00:06:41.092 "zoned": false, 00:06:41.092 "supported_io_types": { 00:06:41.092 "read": true, 00:06:41.092 "write": true, 00:06:41.092 "unmap": true, 00:06:41.092 "flush": true, 00:06:41.092 "reset": true, 00:06:41.092 "nvme_admin": false, 00:06:41.092 "nvme_io": false, 00:06:41.092 "nvme_io_md": false, 00:06:41.092 "write_zeroes": true, 00:06:41.092 "zcopy": true, 00:06:41.092 "get_zone_info": false, 00:06:41.092 "zone_management": false, 00:06:41.092 "zone_append": false, 00:06:41.092 "compare": false, 00:06:41.092 "compare_and_write": false, 00:06:41.092 "abort": true, 00:06:41.092 "seek_hole": false, 00:06:41.092 "seek_data": false, 00:06:41.092 "copy": true, 00:06:41.092 "nvme_iov_md": false 00:06:41.092 }, 00:06:41.092 "memory_domains": [ 00:06:41.092 { 00:06:41.092 "dma_device_id": "system", 00:06:41.092 "dma_device_type": 1 00:06:41.092 }, 00:06:41.092 { 00:06:41.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.092 "dma_device_type": 2 00:06:41.092 } 00:06:41.092 ], 00:06:41.092 "driver_specific": {} 00:06:41.092 }, 00:06:41.092 { 00:06:41.092 "name": "Passthru0", 00:06:41.092 "aliases": [ 00:06:41.092 "23ffafb6-e436-5530-bf21-ca9e05d50faa" 00:06:41.092 ], 00:06:41.092 "product_name": "passthru", 00:06:41.092 "block_size": 512, 00:06:41.092 "num_blocks": 16384, 00:06:41.092 "uuid": "23ffafb6-e436-5530-bf21-ca9e05d50faa", 00:06:41.092 "assigned_rate_limits": { 00:06:41.092 "rw_ios_per_sec": 0, 
00:06:41.092 "rw_mbytes_per_sec": 0, 00:06:41.092 "r_mbytes_per_sec": 0, 00:06:41.092 "w_mbytes_per_sec": 0 00:06:41.092 }, 00:06:41.092 "claimed": false, 00:06:41.092 "zoned": false, 00:06:41.092 "supported_io_types": { 00:06:41.092 "read": true, 00:06:41.092 "write": true, 00:06:41.092 "unmap": true, 00:06:41.092 "flush": true, 00:06:41.092 "reset": true, 00:06:41.092 "nvme_admin": false, 00:06:41.092 "nvme_io": false, 00:06:41.092 "nvme_io_md": false, 00:06:41.092 "write_zeroes": true, 00:06:41.092 "zcopy": true, 00:06:41.092 "get_zone_info": false, 00:06:41.092 "zone_management": false, 00:06:41.092 "zone_append": false, 00:06:41.092 "compare": false, 00:06:41.092 "compare_and_write": false, 00:06:41.092 "abort": true, 00:06:41.092 "seek_hole": false, 00:06:41.092 "seek_data": false, 00:06:41.092 "copy": true, 00:06:41.092 "nvme_iov_md": false 00:06:41.092 }, 00:06:41.092 "memory_domains": [ 00:06:41.092 { 00:06:41.092 "dma_device_id": "system", 00:06:41.092 "dma_device_type": 1 00:06:41.092 }, 00:06:41.092 { 00:06:41.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.092 "dma_device_type": 2 00:06:41.092 } 00:06:41.092 ], 00:06:41.092 "driver_specific": { 00:06:41.092 "passthru": { 00:06:41.092 "name": "Passthru0", 00:06:41.092 "base_bdev_name": "Malloc2" 00:06:41.092 } 00:06:41.092 } 00:06:41.092 } 00:06:41.092 ]' 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:41.092 ************************************ 00:06:41.092 END TEST rpc_daemon_integrity 00:06:41.092 ************************************ 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:41.092 00:06:41.092 real 0m0.316s 00:06:41.092 user 0m0.205s 00:06:41.092 sys 0m0.039s 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.092 02:10:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.092 02:10:42 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:41.092 02:10:42 rpc -- rpc/rpc.sh@84 -- # killprocess 69912 00:06:41.092 02:10:42 rpc -- common/autotest_common.sh@950 -- 
# '[' -z 69912 ']' 00:06:41.092 02:10:42 rpc -- common/autotest_common.sh@954 -- # kill -0 69912 00:06:41.092 02:10:42 rpc -- common/autotest_common.sh@955 -- # uname 00:06:41.092 02:10:42 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.092 02:10:42 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69912 00:06:41.092 killing process with pid 69912 00:06:41.092 02:10:42 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.092 02:10:42 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.092 02:10:42 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69912' 00:06:41.092 02:10:42 rpc -- common/autotest_common.sh@969 -- # kill 69912 00:06:41.092 02:10:42 rpc -- common/autotest_common.sh@974 -- # wait 69912 00:06:41.351 00:06:41.351 real 0m2.147s 00:06:41.351 user 0m2.878s 00:06:41.351 sys 0m0.549s 00:06:41.351 02:10:43 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.351 ************************************ 00:06:41.351 END TEST rpc 00:06:41.351 02:10:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.351 ************************************ 00:06:41.351 02:10:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:41.351 02:10:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.351 02:10:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.351 02:10:43 -- common/autotest_common.sh@10 -- # set +x 00:06:41.351 ************************************ 00:06:41.351 START TEST skip_rpc 00:06:41.351 ************************************ 00:06:41.351 02:10:43 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:41.610 * Looking for test storage... 00:06:41.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:41.610 02:10:43 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.610 02:10:43 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.610 02:10:43 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.610 02:10:43 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.610 02:10:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:41.611 02:10:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.611 02:10:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:41.611 02:10:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:41.611 02:10:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.611 02:10:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:41.611 02:10:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.611 02:10:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.611 02:10:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.611 02:10:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:41.611 02:10:43 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.611 02:10:43 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.611 --rc genhtml_branch_coverage=1 00:06:41.611 --rc genhtml_function_coverage=1 00:06:41.611 --rc genhtml_legend=1 00:06:41.611 --rc geninfo_all_blocks=1 00:06:41.611 --rc geninfo_unexecuted_blocks=1 00:06:41.611 00:06:41.611 ' 00:06:41.611 02:10:43 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.611 --rc genhtml_branch_coverage=1 00:06:41.611 --rc genhtml_function_coverage=1 00:06:41.611 --rc genhtml_legend=1 00:06:41.611 --rc geninfo_all_blocks=1 00:06:41.611 --rc geninfo_unexecuted_blocks=1 00:06:41.611 00:06:41.611 ' 00:06:41.611 02:10:43 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.611 --rc genhtml_branch_coverage=1 00:06:41.611 --rc genhtml_function_coverage=1 00:06:41.611 --rc genhtml_legend=1 00:06:41.611 --rc geninfo_all_blocks=1 00:06:41.611 --rc geninfo_unexecuted_blocks=1 00:06:41.611 00:06:41.611 ' 00:06:41.611 02:10:43 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.611 --rc genhtml_branch_coverage=1 00:06:41.611 --rc genhtml_function_coverage=1 00:06:41.611 --rc genhtml_legend=1 00:06:41.611 --rc geninfo_all_blocks=1 00:06:41.611 --rc geninfo_unexecuted_blocks=1 00:06:41.611 00:06:41.611 ' 00:06:41.611 02:10:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:41.611 02:10:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:41.611 02:10:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:41.611 02:10:43 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.611 02:10:43 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.611 02:10:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.611 ************************************ 00:06:41.611 START TEST skip_rpc 00:06:41.611 ************************************ 00:06:41.611 02:10:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:41.611 02:10:43 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=70105 00:06:41.611 02:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.611 02:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:41.611 02:10:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:41.611 [2024-11-08 02:10:43.474472] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:41.611 [2024-11-08 02:10:43.474598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70105 ] 00:06:41.870 [2024-11-08 02:10:43.616212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.870 [2024-11-08 02:10:43.653363] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.870 [2024-11-08 02:10:43.688914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70105 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70105 ']' 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70105 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70105 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.139 killing process with pid 70105 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70105' 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70105 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70105 00:06:47.139 00:06:47.139 real 0m5.289s 00:06:47.139 user 0m5.020s 00:06:47.139 sys 0m0.185s 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.139 02:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.139 ************************************ 00:06:47.139 END TEST skip_rpc 00:06:47.139 ************************************ 00:06:47.139 02:10:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:47.139 02:10:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.139 02:10:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.139 02:10:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.139 ************************************ 00:06:47.139 START TEST skip_rpc_with_json 00:06:47.139 ************************************ 00:06:47.139 02:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:47.139 02:10:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:47.139 02:10:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70191 00:06:47.139 02:10:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.139 02:10:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.139 02:10:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70191 00:06:47.139 02:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70191 ']' 00:06:47.139 02:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.140 02:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.140 02:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.140 02:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.140 02:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.140 [2024-11-08 02:10:48.819667] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
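The skip_rpc case that wrapped up just above is the simplest check in this suite: a target launched with --no-rpc-server must never answer JSON-RPC, so rpc_cmd spdk_get_version is run under the NOT wrapper and has to fail. A hand-run sketch of the same check, assuming a stock checkout, the default /var/tmp/spdk.sock path, and a plain sleep where the script itself also sleeps 5 seconds:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &    # no RPC listener is ever created
  sleep 5                                          # matches the sleep 5 in skip_rpc.sh before probing
  ./scripts/rpc.py spdk_get_version \
      && echo "unexpected: RPC answered" \
      || echo "refused as expected"
  kill %1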
00:06:47.140 [2024-11-08 02:10:48.819793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70191 ] 00:06:47.140 [2024-11-08 02:10:48.968944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.140 [2024-11-08 02:10:49.003843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.399 [2024-11-08 02:10:49.040572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.399 [2024-11-08 02:10:49.166387] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:47.399 request: 00:06:47.399 { 00:06:47.399 "trtype": "tcp", 00:06:47.399 "method": "nvmf_get_transports", 00:06:47.399 "req_id": 1 00:06:47.399 } 00:06:47.399 Got JSON-RPC error response 00:06:47.399 response: 00:06:47.399 { 00:06:47.399 "code": -19, 00:06:47.399 "message": "No such device" 00:06:47.399 } 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.399 [2024-11-08 02:10:49.178468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.399 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:47.659 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.659 02:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:47.659 { 00:06:47.659 "subsystems": [ 00:06:47.659 { 00:06:47.659 "subsystem": "fsdev", 00:06:47.659 "config": [ 00:06:47.659 { 00:06:47.659 "method": "fsdev_set_opts", 00:06:47.659 "params": { 00:06:47.659 "fsdev_io_pool_size": 65535, 00:06:47.659 "fsdev_io_cache_size": 256 00:06:47.659 } 00:06:47.659 } 00:06:47.659 ] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "vfio_user_target", 00:06:47.659 "config": null 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "keyring", 00:06:47.659 "config": [] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "iobuf", 00:06:47.659 "config": [ 00:06:47.659 { 00:06:47.659 "method": "iobuf_set_options", 00:06:47.659 "params": { 00:06:47.659 "small_pool_count": 8192, 00:06:47.659 "large_pool_count": 1024, 00:06:47.659 
"small_bufsize": 8192, 00:06:47.659 "large_bufsize": 135168 00:06:47.659 } 00:06:47.659 } 00:06:47.659 ] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "sock", 00:06:47.659 "config": [ 00:06:47.659 { 00:06:47.659 "method": "sock_set_default_impl", 00:06:47.659 "params": { 00:06:47.659 "impl_name": "uring" 00:06:47.659 } 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "method": "sock_impl_set_options", 00:06:47.659 "params": { 00:06:47.659 "impl_name": "ssl", 00:06:47.659 "recv_buf_size": 4096, 00:06:47.659 "send_buf_size": 4096, 00:06:47.659 "enable_recv_pipe": true, 00:06:47.659 "enable_quickack": false, 00:06:47.659 "enable_placement_id": 0, 00:06:47.659 "enable_zerocopy_send_server": true, 00:06:47.659 "enable_zerocopy_send_client": false, 00:06:47.659 "zerocopy_threshold": 0, 00:06:47.659 "tls_version": 0, 00:06:47.659 "enable_ktls": false 00:06:47.659 } 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "method": "sock_impl_set_options", 00:06:47.659 "params": { 00:06:47.659 "impl_name": "posix", 00:06:47.659 "recv_buf_size": 2097152, 00:06:47.659 "send_buf_size": 2097152, 00:06:47.659 "enable_recv_pipe": true, 00:06:47.659 "enable_quickack": false, 00:06:47.659 "enable_placement_id": 0, 00:06:47.659 "enable_zerocopy_send_server": true, 00:06:47.659 "enable_zerocopy_send_client": false, 00:06:47.659 "zerocopy_threshold": 0, 00:06:47.659 "tls_version": 0, 00:06:47.659 "enable_ktls": false 00:06:47.659 } 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "method": "sock_impl_set_options", 00:06:47.659 "params": { 00:06:47.659 "impl_name": "uring", 00:06:47.659 "recv_buf_size": 2097152, 00:06:47.659 "send_buf_size": 2097152, 00:06:47.659 "enable_recv_pipe": true, 00:06:47.659 "enable_quickack": false, 00:06:47.659 "enable_placement_id": 0, 00:06:47.659 "enable_zerocopy_send_server": false, 00:06:47.659 "enable_zerocopy_send_client": false, 00:06:47.659 "zerocopy_threshold": 0, 00:06:47.659 "tls_version": 0, 00:06:47.659 "enable_ktls": false 00:06:47.659 } 00:06:47.659 } 00:06:47.659 ] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "vmd", 00:06:47.659 "config": [] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "accel", 00:06:47.659 "config": [ 00:06:47.659 { 00:06:47.659 "method": "accel_set_options", 00:06:47.659 "params": { 00:06:47.659 "small_cache_size": 128, 00:06:47.659 "large_cache_size": 16, 00:06:47.659 "task_count": 2048, 00:06:47.659 "sequence_count": 2048, 00:06:47.659 "buf_count": 2048 00:06:47.659 } 00:06:47.659 } 00:06:47.659 ] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "bdev", 00:06:47.659 "config": [ 00:06:47.659 { 00:06:47.659 "method": "bdev_set_options", 00:06:47.659 "params": { 00:06:47.659 "bdev_io_pool_size": 65535, 00:06:47.659 "bdev_io_cache_size": 256, 00:06:47.659 "bdev_auto_examine": true, 00:06:47.659 "iobuf_small_cache_size": 128, 00:06:47.659 "iobuf_large_cache_size": 16 00:06:47.659 } 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "method": "bdev_raid_set_options", 00:06:47.659 "params": { 00:06:47.659 "process_window_size_kb": 1024, 00:06:47.659 "process_max_bandwidth_mb_sec": 0 00:06:47.659 } 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "method": "bdev_iscsi_set_options", 00:06:47.659 "params": { 00:06:47.659 "timeout_sec": 30 00:06:47.659 } 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "method": "bdev_nvme_set_options", 00:06:47.659 "params": { 00:06:47.659 "action_on_timeout": "none", 00:06:47.659 "timeout_us": 0, 00:06:47.659 "timeout_admin_us": 0, 00:06:47.659 "keep_alive_timeout_ms": 10000, 00:06:47.659 "arbitration_burst": 0, 
00:06:47.659 "low_priority_weight": 0, 00:06:47.659 "medium_priority_weight": 0, 00:06:47.659 "high_priority_weight": 0, 00:06:47.659 "nvme_adminq_poll_period_us": 10000, 00:06:47.659 "nvme_ioq_poll_period_us": 0, 00:06:47.659 "io_queue_requests": 0, 00:06:47.659 "delay_cmd_submit": true, 00:06:47.659 "transport_retry_count": 4, 00:06:47.659 "bdev_retry_count": 3, 00:06:47.659 "transport_ack_timeout": 0, 00:06:47.659 "ctrlr_loss_timeout_sec": 0, 00:06:47.659 "reconnect_delay_sec": 0, 00:06:47.659 "fast_io_fail_timeout_sec": 0, 00:06:47.659 "disable_auto_failback": false, 00:06:47.659 "generate_uuids": false, 00:06:47.659 "transport_tos": 0, 00:06:47.659 "nvme_error_stat": false, 00:06:47.659 "rdma_srq_size": 0, 00:06:47.659 "io_path_stat": false, 00:06:47.659 "allow_accel_sequence": false, 00:06:47.659 "rdma_max_cq_size": 0, 00:06:47.659 "rdma_cm_event_timeout_ms": 0, 00:06:47.659 "dhchap_digests": [ 00:06:47.659 "sha256", 00:06:47.659 "sha384", 00:06:47.659 "sha512" 00:06:47.659 ], 00:06:47.659 "dhchap_dhgroups": [ 00:06:47.659 "null", 00:06:47.659 "ffdhe2048", 00:06:47.659 "ffdhe3072", 00:06:47.659 "ffdhe4096", 00:06:47.659 "ffdhe6144", 00:06:47.659 "ffdhe8192" 00:06:47.659 ] 00:06:47.659 } 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "method": "bdev_nvme_set_hotplug", 00:06:47.659 "params": { 00:06:47.659 "period_us": 100000, 00:06:47.659 "enable": false 00:06:47.659 } 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "method": "bdev_wait_for_examine" 00:06:47.659 } 00:06:47.659 ] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "scsi", 00:06:47.659 "config": null 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "scheduler", 00:06:47.659 "config": [ 00:06:47.659 { 00:06:47.659 "method": "framework_set_scheduler", 00:06:47.659 "params": { 00:06:47.659 "name": "static" 00:06:47.659 } 00:06:47.659 } 00:06:47.659 ] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "vhost_scsi", 00:06:47.659 "config": [] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "vhost_blk", 00:06:47.659 "config": [] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "ublk", 00:06:47.659 "config": [] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "nbd", 00:06:47.659 "config": [] 00:06:47.659 }, 00:06:47.659 { 00:06:47.659 "subsystem": "nvmf", 00:06:47.659 "config": [ 00:06:47.659 { 00:06:47.659 "method": "nvmf_set_config", 00:06:47.659 "params": { 00:06:47.659 "discovery_filter": "match_any", 00:06:47.659 "admin_cmd_passthru": { 00:06:47.659 "identify_ctrlr": false 00:06:47.659 }, 00:06:47.660 "dhchap_digests": [ 00:06:47.660 "sha256", 00:06:47.660 "sha384", 00:06:47.660 "sha512" 00:06:47.660 ], 00:06:47.660 "dhchap_dhgroups": [ 00:06:47.660 "null", 00:06:47.660 "ffdhe2048", 00:06:47.660 "ffdhe3072", 00:06:47.660 "ffdhe4096", 00:06:47.660 "ffdhe6144", 00:06:47.660 "ffdhe8192" 00:06:47.660 ] 00:06:47.660 } 00:06:47.660 }, 00:06:47.660 { 00:06:47.660 "method": "nvmf_set_max_subsystems", 00:06:47.660 "params": { 00:06:47.660 "max_subsystems": 1024 00:06:47.660 } 00:06:47.660 }, 00:06:47.660 { 00:06:47.660 "method": "nvmf_set_crdt", 00:06:47.660 "params": { 00:06:47.660 "crdt1": 0, 00:06:47.660 "crdt2": 0, 00:06:47.660 "crdt3": 0 00:06:47.660 } 00:06:47.660 }, 00:06:47.660 { 00:06:47.660 "method": "nvmf_create_transport", 00:06:47.660 "params": { 00:06:47.660 "trtype": "TCP", 00:06:47.660 "max_queue_depth": 128, 00:06:47.660 "max_io_qpairs_per_ctrlr": 127, 00:06:47.660 "in_capsule_data_size": 4096, 00:06:47.660 "max_io_size": 131072, 00:06:47.660 "io_unit_size": 131072, 00:06:47.660 
"max_aq_depth": 128, 00:06:47.660 "num_shared_buffers": 511, 00:06:47.660 "buf_cache_size": 4294967295, 00:06:47.660 "dif_insert_or_strip": false, 00:06:47.660 "zcopy": false, 00:06:47.660 "c2h_success": true, 00:06:47.660 "sock_priority": 0, 00:06:47.660 "abort_timeout_sec": 1, 00:06:47.660 "ack_timeout": 0, 00:06:47.660 "data_wr_pool_size": 0 00:06:47.660 } 00:06:47.660 } 00:06:47.660 ] 00:06:47.660 }, 00:06:47.660 { 00:06:47.660 "subsystem": "iscsi", 00:06:47.660 "config": [ 00:06:47.660 { 00:06:47.660 "method": "iscsi_set_options", 00:06:47.660 "params": { 00:06:47.660 "node_base": "iqn.2016-06.io.spdk", 00:06:47.660 "max_sessions": 128, 00:06:47.660 "max_connections_per_session": 2, 00:06:47.660 "max_queue_depth": 64, 00:06:47.660 "default_time2wait": 2, 00:06:47.660 "default_time2retain": 20, 00:06:47.660 "first_burst_length": 8192, 00:06:47.660 "immediate_data": true, 00:06:47.660 "allow_duplicated_isid": false, 00:06:47.660 "error_recovery_level": 0, 00:06:47.660 "nop_timeout": 60, 00:06:47.660 "nop_in_interval": 30, 00:06:47.660 "disable_chap": false, 00:06:47.660 "require_chap": false, 00:06:47.660 "mutual_chap": false, 00:06:47.660 "chap_group": 0, 00:06:47.660 "max_large_datain_per_connection": 64, 00:06:47.660 "max_r2t_per_connection": 4, 00:06:47.660 "pdu_pool_size": 36864, 00:06:47.660 "immediate_data_pool_size": 16384, 00:06:47.660 "data_out_pool_size": 2048 00:06:47.660 } 00:06:47.660 } 00:06:47.660 ] 00:06:47.660 } 00:06:47.660 ] 00:06:47.660 } 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70191 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70191 ']' 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70191 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70191 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.660 killing process with pid 70191 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70191' 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70191 00:06:47.660 02:10:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70191 00:06:47.921 02:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70206 00:06:47.921 02:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:47.921 02:10:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70206 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70206 ']' 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70206 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # uname 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70206 00:06:53.193 killing process with pid 70206 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70206' 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70206 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70206 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:53.193 ************************************ 00:06:53.193 END TEST skip_rpc_with_json 00:06:53.193 ************************************ 00:06:53.193 00:06:53.193 real 0m6.172s 00:06:53.193 user 0m5.875s 00:06:53.193 sys 0m0.462s 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.193 02:10:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:53.193 02:10:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.193 02:10:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.193 02:10:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.193 ************************************ 00:06:53.193 START TEST skip_rpc_with_delay 00:06:53.193 ************************************ 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:53.193 02:10:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.193 [2024-11-08 02:10:55.021440] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:53.193 [2024-11-08 02:10:55.021539] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:53.193 02:10:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:53.193 ************************************ 00:06:53.193 END TEST skip_rpc_with_delay 00:06:53.193 ************************************ 00:06:53.193 02:10:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.193 02:10:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.193 02:10:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.193 00:06:53.193 real 0m0.072s 00:06:53.193 user 0m0.046s 00:06:53.193 sys 0m0.026s 00:06:53.193 02:10:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.193 02:10:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:53.452 02:10:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:53.452 02:10:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:53.452 02:10:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:53.452 02:10:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.452 02:10:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.452 02:10:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.452 ************************************ 00:06:53.452 START TEST exit_on_failed_rpc_init 00:06:53.452 ************************************ 00:06:53.452 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:53.452 02:10:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70315 00:06:53.452 02:10:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70315 00:06:53.452 02:10:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.452 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70315 ']' 00:06:53.452 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.452 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.452 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
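skip_rpc_with_delay, which just ended above, is a pure argument-validation check: spdk_tgt has to reject the contradictory pair --no-rpc-server plus --wait-for-rpc and exit non-zero, which the trace asserts through the NOT/valid_exec_arg helpers. A minimal sketch of the same check against a local build:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected on stderr: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
  # expected result: non-zero exit status, no target process left running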
00:06:53.452 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.452 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:53.452 [2024-11-08 02:10:55.162475] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:53.452 [2024-11-08 02:10:55.162566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70315 ] 00:06:53.452 [2024-11-08 02:10:55.303957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.711 [2024-11-08 02:10:55.339048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.711 [2024-11-08 02:10:55.375328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:53.711 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:53.711 [2024-11-08 02:10:55.563018] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:53.711 [2024-11-08 02:10:55.563333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70326 ] 00:06:53.970 [2024-11-08 02:10:55.702679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.970 [2024-11-08 02:10:55.743568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.970 [2024-11-08 02:10:55.743946] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:53.970 [2024-11-08 02:10:55.743971] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:53.970 [2024-11-08 02:10:55.743982] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.970 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:53.970 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.970 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:53.970 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70315 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70315 ']' 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70315 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70315 00:06:53.971 killing process with pid 70315 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70315' 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70315 00:06:53.971 02:10:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70315 00:06:54.230 ************************************ 00:06:54.230 END TEST exit_on_failed_rpc_init 00:06:54.230 ************************************ 00:06:54.230 00:06:54.230 real 0m0.978s 00:06:54.230 user 0m1.140s 00:06:54.230 sys 0m0.270s 00:06:54.230 02:10:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.230 02:10:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:54.489 02:10:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:54.489 00:06:54.489 real 0m12.912s 00:06:54.489 user 0m12.281s 
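exit_on_failed_rpc_init, finished above, covers the failure path when two targets contend for the same RPC socket: the second spdk_tgt must report /var/tmp/spdk.sock as in use and exit through spdk_app_stop with a non-zero code, after which the test tears down the first instance. Sketched by hand (the suite waits on the first instance with waitforlisten; the sleep below is only a crude stand-in for that helper):

  ./build/bin/spdk_tgt -m 0x1 &       # first instance owns /var/tmp/spdk.sock once it is up
  sleep 5                             # stand-in for waitforlisten
  ./build/bin/spdk_tgt -m 0x2         # expected: RPC Unix domain socket path /var/tmp/spdk.sock in use.
  kill %1                             # clean up the surviving first instance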
00:06:54.489 sys 0m1.134s 00:06:54.489 02:10:56 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.489 ************************************ 00:06:54.489 END TEST skip_rpc 00:06:54.489 ************************************ 00:06:54.489 02:10:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.489 02:10:56 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:54.489 02:10:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.489 02:10:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.489 02:10:56 -- common/autotest_common.sh@10 -- # set +x 00:06:54.489 ************************************ 00:06:54.489 START TEST rpc_client 00:06:54.489 ************************************ 00:06:54.489 02:10:56 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:54.489 * Looking for test storage... 00:06:54.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:54.489 02:10:56 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:54.489 02:10:56 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:54.489 02:10:56 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:54.489 02:10:56 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:54.489 02:10:56 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.490 02:10:56 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:54.490 02:10:56 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.490 02:10:56 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:54.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.490 --rc genhtml_branch_coverage=1 00:06:54.490 --rc genhtml_function_coverage=1 00:06:54.490 --rc genhtml_legend=1 00:06:54.490 --rc geninfo_all_blocks=1 00:06:54.490 --rc geninfo_unexecuted_blocks=1 00:06:54.490 00:06:54.490 ' 00:06:54.490 02:10:56 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:54.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.490 --rc genhtml_branch_coverage=1 00:06:54.490 --rc genhtml_function_coverage=1 00:06:54.490 --rc genhtml_legend=1 00:06:54.490 --rc geninfo_all_blocks=1 00:06:54.490 --rc geninfo_unexecuted_blocks=1 00:06:54.490 00:06:54.490 ' 00:06:54.490 02:10:56 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:54.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.490 --rc genhtml_branch_coverage=1 00:06:54.490 --rc genhtml_function_coverage=1 00:06:54.490 --rc genhtml_legend=1 00:06:54.490 --rc geninfo_all_blocks=1 00:06:54.490 --rc geninfo_unexecuted_blocks=1 00:06:54.490 00:06:54.490 ' 00:06:54.490 02:10:56 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:54.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.490 --rc genhtml_branch_coverage=1 00:06:54.490 --rc genhtml_function_coverage=1 00:06:54.490 --rc genhtml_legend=1 00:06:54.490 --rc geninfo_all_blocks=1 00:06:54.490 --rc geninfo_unexecuted_blocks=1 00:06:54.490 00:06:54.490 ' 00:06:54.490 02:10:56 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:54.750 OK 00:06:54.750 02:10:56 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:54.750 00:06:54.750 real 0m0.205s 00:06:54.750 user 0m0.125s 00:06:54.750 sys 0m0.091s 00:06:54.750 ************************************ 00:06:54.750 END TEST rpc_client 00:06:54.751 ************************************ 00:06:54.751 02:10:56 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.751 02:10:56 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:54.751 02:10:56 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:54.751 02:10:56 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.751 02:10:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.751 02:10:56 -- common/autotest_common.sh@10 -- # set +x 00:06:54.751 ************************************ 00:06:54.751 START TEST json_config 00:06:54.751 ************************************ 00:06:54.751 02:10:56 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:54.751 02:10:56 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:54.751 02:10:56 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:54.751 02:10:56 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:54.751 02:10:56 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:54.751 02:10:56 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.751 02:10:56 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.751 02:10:56 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.751 02:10:56 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.751 02:10:56 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.751 02:10:56 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.751 02:10:56 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.751 02:10:56 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.751 02:10:56 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.751 02:10:56 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.751 02:10:56 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.751 02:10:56 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:54.751 02:10:56 json_config -- scripts/common.sh@345 -- # : 1 00:06:54.751 02:10:56 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.751 02:10:56 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.751 02:10:56 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:54.751 02:10:56 json_config -- scripts/common.sh@353 -- # local d=1 00:06:54.751 02:10:56 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.751 02:10:56 json_config -- scripts/common.sh@355 -- # echo 1 00:06:54.751 02:10:56 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.751 02:10:56 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:54.751 02:10:56 json_config -- scripts/common.sh@353 -- # local d=2 00:06:54.751 02:10:56 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.751 02:10:56 json_config -- scripts/common.sh@355 -- # echo 2 00:06:54.751 02:10:56 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.751 02:10:56 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.751 02:10:56 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.751 02:10:56 json_config -- scripts/common.sh@368 -- # return 0 00:06:54.751 02:10:56 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.751 02:10:56 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:54.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.751 --rc genhtml_branch_coverage=1 00:06:54.751 --rc genhtml_function_coverage=1 00:06:54.751 --rc genhtml_legend=1 00:06:54.751 --rc geninfo_all_blocks=1 00:06:54.751 --rc geninfo_unexecuted_blocks=1 00:06:54.751 00:06:54.751 ' 00:06:54.751 02:10:56 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:54.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.751 --rc genhtml_branch_coverage=1 00:06:54.751 --rc genhtml_function_coverage=1 00:06:54.751 --rc genhtml_legend=1 00:06:54.751 --rc geninfo_all_blocks=1 00:06:54.751 --rc geninfo_unexecuted_blocks=1 00:06:54.751 00:06:54.751 ' 00:06:54.751 02:10:56 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:54.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.751 --rc genhtml_branch_coverage=1 00:06:54.751 --rc genhtml_function_coverage=1 00:06:54.751 --rc genhtml_legend=1 00:06:54.751 --rc geninfo_all_blocks=1 00:06:54.751 --rc geninfo_unexecuted_blocks=1 00:06:54.751 00:06:54.751 ' 00:06:54.751 02:10:56 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:54.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.751 --rc genhtml_branch_coverage=1 00:06:54.751 --rc genhtml_function_coverage=1 00:06:54.751 --rc genhtml_legend=1 00:06:54.751 --rc geninfo_all_blocks=1 00:06:54.751 --rc geninfo_unexecuted_blocks=1 00:06:54.751 00:06:54.751 ' 00:06:54.751 02:10:56 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.751 02:10:56 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.751 02:10:56 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:54.751 02:10:56 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.751 02:10:56 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.751 02:10:56 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.751 02:10:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.751 02:10:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.751 02:10:56 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.751 02:10:56 json_config -- paths/export.sh@5 -- # export PATH 00:06:54.751 02:10:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@51 -- # : 0 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:54.751 02:10:56 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:54.751 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:54.751 02:10:56 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:54.751 02:10:56 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:54.751 02:10:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:54.751 02:10:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:54.751 02:10:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:54.751 02:10:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:54.751 02:10:56 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:54.751 INFO: JSON configuration test init 00:06:54.751 02:10:56 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:54.751 02:10:56 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:54.751 02:10:56 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:54.752 02:10:56 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:54.752 02:10:56 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:54.752 02:10:56 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:54.752 02:10:56 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:54.752 02:10:56 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:54.752 02:10:56 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:54.752 02:10:56 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:54.752 02:10:56 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:54.752 02:10:56 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:54.752 02:10:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:54.752 02:10:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.752 02:10:56 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:54.752 02:10:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:54.752 02:10:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.752 02:10:56 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:54.752 02:10:56 json_config -- json_config/common.sh@9 -- # local app=target 00:06:54.752 02:10:56 json_config -- json_config/common.sh@10 -- # shift 
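The json_config suite starting here talks to the target over its own socket (/var/tmp/spdk_tgt.sock, per app_socket above) and repeats the save-and-replay idea already exercised by skip_rpc_with_json earlier in this log: bring the target up paused with --wait-for-rpc, configure it over JSON-RPC, dump the live state with save_config, then restart from that file and compare. A condensed sketch of that round trip, assuming a default build and using a sleep in place of the waitforlisten helper:

  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  sleep 5                                                             # stand-in for waitforlisten
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init     # release the --wait-for-rpc pause
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  kill %1
  ./build/bin/spdk_tgt -m 0x1 --json spdk_tgt_config.json &           # replay the saved configuration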
00:06:54.752 02:10:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:54.752 02:10:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:54.752 02:10:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:54.752 02:10:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:54.752 02:10:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:54.752 02:10:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=70460 00:06:54.752 02:10:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:54.752 Waiting for target to run... 00:06:54.752 02:10:56 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:54.752 02:10:56 json_config -- json_config/common.sh@25 -- # waitforlisten 70460 /var/tmp/spdk_tgt.sock 00:06:54.752 02:10:56 json_config -- common/autotest_common.sh@831 -- # '[' -z 70460 ']' 00:06:54.752 02:10:56 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:54.752 02:10:56 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.752 02:10:56 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:54.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:54.752 02:10:56 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.752 02:10:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.021 [2024-11-08 02:10:56.695788] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:55.021 [2024-11-08 02:10:56.696137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70460 ] 00:06:55.287 [2024-11-08 02:10:57.011816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.287 [2024-11-08 02:10:57.033252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.855 00:06:55.855 02:10:57 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.855 02:10:57 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:55.855 02:10:57 json_config -- json_config/common.sh@26 -- # echo '' 00:06:55.855 02:10:57 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:55.855 02:10:57 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:55.855 02:10:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:55.855 02:10:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.855 02:10:57 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:55.855 02:10:57 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:55.855 02:10:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:55.855 02:10:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:55.855 02:10:57 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:55.855 02:10:57 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:55.855 02:10:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:56.117 [2024-11-08 02:10:57.979382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.376 02:10:58 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:56.376 02:10:58 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:56.376 02:10:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:56.376 02:10:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.376 02:10:58 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:56.376 02:10:58 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:56.376 02:10:58 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:56.376 02:10:58 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:56.376 02:10:58 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:56.376 02:10:58 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:56.376 02:10:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:56.376 02:10:58 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@54 -- # sort 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:56.635 02:10:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:56.635 02:10:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:56.635 02:10:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:56.635 02:10:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:56.635 02:10:58 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:56.635 02:10:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:57.202 MallocForNvmf0 00:06:57.202 02:10:58 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:57.202 02:10:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:57.202 MallocForNvmf1 00:06:57.202 02:10:59 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:57.202 02:10:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:57.461 [2024-11-08 02:10:59.337046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.719 02:10:59 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:57.719 02:10:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:57.719 02:10:59 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:57.719 02:10:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:57.976 02:10:59 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:57.976 02:10:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:58.234 02:11:00 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:58.234 02:11:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:58.492 [2024-11-08 02:11:00.265532] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:58.492 02:11:00 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:58.492 02:11:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:58.492 02:11:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.492 02:11:00 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:58.492 02:11:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:58.492 02:11:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.492 02:11:00 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:58.492 02:11:00 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:58.492 02:11:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:58.751 MallocBdevForConfigChangeCheck 00:06:58.751 02:11:00 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:58.751 02:11:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:58.751 02:11:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.751 02:11:00 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:58.751 02:11:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:59.319 INFO: shutting down applications... 00:06:59.319 02:11:01 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
00:06:59.319 02:11:01 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:59.319 02:11:01 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:59.319 02:11:01 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:59.319 02:11:01 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:59.578 Calling clear_iscsi_subsystem 00:06:59.578 Calling clear_nvmf_subsystem 00:06:59.578 Calling clear_nbd_subsystem 00:06:59.578 Calling clear_ublk_subsystem 00:06:59.578 Calling clear_vhost_blk_subsystem 00:06:59.578 Calling clear_vhost_scsi_subsystem 00:06:59.578 Calling clear_bdev_subsystem 00:06:59.578 02:11:01 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:59.578 02:11:01 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:59.578 02:11:01 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:59.578 02:11:01 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:59.578 02:11:01 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:59.578 02:11:01 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:00.145 02:11:01 json_config -- json_config/json_config.sh@352 -- # break 00:07:00.145 02:11:01 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:00.145 02:11:01 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:00.145 02:11:01 json_config -- json_config/common.sh@31 -- # local app=target 00:07:00.145 02:11:01 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:00.145 02:11:01 json_config -- json_config/common.sh@35 -- # [[ -n 70460 ]] 00:07:00.145 02:11:01 json_config -- json_config/common.sh@38 -- # kill -SIGINT 70460 00:07:00.145 02:11:01 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:00.145 02:11:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:00.145 02:11:01 json_config -- json_config/common.sh@41 -- # kill -0 70460 00:07:00.145 02:11:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:00.712 02:11:02 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:00.712 02:11:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:00.712 02:11:02 json_config -- json_config/common.sh@41 -- # kill -0 70460 00:07:00.712 02:11:02 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:00.712 02:11:02 json_config -- json_config/common.sh@43 -- # break 00:07:00.712 SPDK target shutdown done 00:07:00.712 INFO: relaunching applications... 00:07:00.712 02:11:02 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:00.712 02:11:02 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:00.712 02:11:02 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
00:07:00.712 02:11:02 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:00.712 02:11:02 json_config -- json_config/common.sh@9 -- # local app=target 00:07:00.712 02:11:02 json_config -- json_config/common.sh@10 -- # shift 00:07:00.712 02:11:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:00.712 02:11:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:00.712 02:11:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:00.712 02:11:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.712 02:11:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.712 02:11:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=70655 00:07:00.712 02:11:02 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:00.713 02:11:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:00.713 Waiting for target to run... 00:07:00.713 02:11:02 json_config -- json_config/common.sh@25 -- # waitforlisten 70655 /var/tmp/spdk_tgt.sock 00:07:00.713 02:11:02 json_config -- common/autotest_common.sh@831 -- # '[' -z 70655 ']' 00:07:00.713 02:11:02 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:00.713 02:11:02 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.713 02:11:02 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:00.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:00.713 02:11:02 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.713 02:11:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.713 [2024-11-08 02:11:02.351317] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:00.713 [2024-11-08 02:11:02.351558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70655 ] 00:07:00.971 [2024-11-08 02:11:02.630426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.971 [2024-11-08 02:11:02.650723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.971 [2024-11-08 02:11:02.778312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.230 [2024-11-08 02:11:02.966554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.230 [2024-11-08 02:11:02.998612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:01.797 00:07:01.797 INFO: Checking if target configuration is the same... 
00:07:01.797 02:11:03 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.797 02:11:03 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:01.797 02:11:03 json_config -- json_config/common.sh@26 -- # echo '' 00:07:01.797 02:11:03 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:01.797 02:11:03 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:01.797 02:11:03 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:01.797 02:11:03 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:01.797 02:11:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:01.797 + '[' 2 -ne 2 ']' 00:07:01.797 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:01.797 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:01.797 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:01.797 +++ basename /dev/fd/62 00:07:01.797 ++ mktemp /tmp/62.XXX 00:07:01.797 + tmp_file_1=/tmp/62.lR5 00:07:01.797 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:01.797 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:01.797 + tmp_file_2=/tmp/spdk_tgt_config.json.p9H 00:07:01.797 + ret=0 00:07:01.797 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:02.056 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:02.056 + diff -u /tmp/62.lR5 /tmp/spdk_tgt_config.json.p9H 00:07:02.056 INFO: JSON config files are the same 00:07:02.056 + echo 'INFO: JSON config files are the same' 00:07:02.056 + rm /tmp/62.lR5 /tmp/spdk_tgt_config.json.p9H 00:07:02.056 + exit 0 00:07:02.056 INFO: changing configuration and checking if this can be detected... 00:07:02.056 02:11:03 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:02.056 02:11:03 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:02.056 02:11:03 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:02.056 02:11:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:02.315 02:11:04 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:02.315 02:11:04 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:02.315 02:11:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:02.315 + '[' 2 -ne 2 ']' 00:07:02.315 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:02.315 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:02.315 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:02.315 +++ basename /dev/fd/62 00:07:02.315 ++ mktemp /tmp/62.XXX 00:07:02.315 + tmp_file_1=/tmp/62.JPk 00:07:02.315 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:02.315 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:02.315 + tmp_file_2=/tmp/spdk_tgt_config.json.gJq 00:07:02.315 + ret=0 00:07:02.315 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:02.883 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:02.883 + diff -u /tmp/62.JPk /tmp/spdk_tgt_config.json.gJq 00:07:02.883 + ret=1 00:07:02.883 + echo '=== Start of file: /tmp/62.JPk ===' 00:07:02.883 + cat /tmp/62.JPk 00:07:02.883 + echo '=== End of file: /tmp/62.JPk ===' 00:07:02.883 + echo '' 00:07:02.883 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gJq ===' 00:07:02.883 + cat /tmp/spdk_tgt_config.json.gJq 00:07:02.883 + echo '=== End of file: /tmp/spdk_tgt_config.json.gJq ===' 00:07:02.883 + echo '' 00:07:02.883 + rm /tmp/62.JPk /tmp/spdk_tgt_config.json.gJq 00:07:02.883 + exit 1 00:07:02.883 INFO: configuration change detected. 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@324 -- # [[ -n 70655 ]] 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.883 02:11:04 json_config -- json_config/json_config.sh@330 -- # killprocess 70655 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@950 -- # '[' -z 70655 ']' 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@954 -- # kill -0 70655 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@955 -- # uname 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70655 00:07:02.883 
killing process with pid 70655 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70655' 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@969 -- # kill 70655 00:07:02.883 02:11:04 json_config -- common/autotest_common.sh@974 -- # wait 70655 00:07:03.143 02:11:04 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:03.143 02:11:04 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:03.143 02:11:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.143 02:11:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.143 INFO: Success 00:07:03.143 02:11:04 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:03.143 02:11:04 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:03.143 00:07:03.143 real 0m8.485s 00:07:03.143 user 0m12.335s 00:07:03.143 sys 0m1.435s 00:07:03.143 ************************************ 00:07:03.143 END TEST json_config 00:07:03.143 ************************************ 00:07:03.143 02:11:04 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.143 02:11:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.143 02:11:04 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:03.143 02:11:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.143 02:11:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.143 02:11:04 -- common/autotest_common.sh@10 -- # set +x 00:07:03.143 ************************************ 00:07:03.143 START TEST json_config_extra_key 00:07:03.143 ************************************ 00:07:03.143 02:11:04 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:03.143 02:11:05 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.143 02:11:05 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.143 02:11:05 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.403 02:11:05 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.403 02:11:05 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.403 02:11:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:03.403 02:11:05 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.403 02:11:05 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.403 --rc genhtml_branch_coverage=1 00:07:03.403 --rc genhtml_function_coverage=1 00:07:03.403 --rc genhtml_legend=1 00:07:03.403 --rc geninfo_all_blocks=1 00:07:03.403 --rc geninfo_unexecuted_blocks=1 00:07:03.403 00:07:03.403 ' 00:07:03.403 02:11:05 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.403 --rc genhtml_branch_coverage=1 00:07:03.403 --rc genhtml_function_coverage=1 00:07:03.403 --rc genhtml_legend=1 00:07:03.403 --rc geninfo_all_blocks=1 00:07:03.403 --rc geninfo_unexecuted_blocks=1 00:07:03.403 00:07:03.403 ' 00:07:03.403 02:11:05 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.403 --rc genhtml_branch_coverage=1 00:07:03.403 --rc genhtml_function_coverage=1 00:07:03.403 --rc genhtml_legend=1 00:07:03.403 --rc geninfo_all_blocks=1 00:07:03.403 --rc geninfo_unexecuted_blocks=1 00:07:03.403 00:07:03.403 ' 00:07:03.403 02:11:05 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.403 --rc genhtml_branch_coverage=1 00:07:03.403 --rc genhtml_function_coverage=1 00:07:03.403 --rc genhtml_legend=1 00:07:03.403 --rc geninfo_all_blocks=1 00:07:03.403 --rc geninfo_unexecuted_blocks=1 00:07:03.403 00:07:03.403 ' 00:07:03.403 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:03.403 02:11:05 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:07:03.403 02:11:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.403 02:11:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.403 02:11:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.403 02:11:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.403 02:11:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.403 02:11:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.403 02:11:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.403 02:11:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.403 02:11:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.403 02:11:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:03.404 02:11:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.404 02:11:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.404 02:11:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.404 02:11:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.404 02:11:05 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.404 02:11:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.404 02:11:05 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.404 02:11:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:03.404 02:11:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.404 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.404 02:11:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.404 INFO: launching applications... 00:07:03.404 Waiting for target to run... 00:07:03.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:03.404 02:11:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:03.404 02:11:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:03.404 02:11:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:03.404 02:11:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:03.404 02:11:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:03.404 02:11:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:03.404 02:11:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:03.404 02:11:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:03.404 02:11:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70804 00:07:03.404 02:11:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:03.404 02:11:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70804 /var/tmp/spdk_tgt.sock 00:07:03.404 02:11:05 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70804 ']' 00:07:03.404 02:11:05 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:03.404 02:11:05 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:03.404 02:11:05 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.404 02:11:05 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:07:03.404 02:11:05 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.404 02:11:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:03.404 [2024-11-08 02:11:05.207808] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:03.404 [2024-11-08 02:11:05.208077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70804 ] 00:07:03.663 [2024-11-08 02:11:05.507433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.663 [2024-11-08 02:11:05.526932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.922 [2024-11-08 02:11:05.550787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.489 02:11:06 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.489 02:11:06 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:04.489 02:11:06 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:04.489 00:07:04.489 02:11:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:04.489 INFO: shutting down applications... 00:07:04.489 02:11:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:04.489 02:11:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:04.489 02:11:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:04.489 02:11:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70804 ]] 00:07:04.489 02:11:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70804 00:07:04.489 02:11:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:04.489 02:11:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:04.489 02:11:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70804 00:07:04.489 02:11:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:05.057 02:11:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:05.057 02:11:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:05.057 02:11:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70804 00:07:05.057 02:11:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:05.057 02:11:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:05.057 SPDK target shutdown done 00:07:05.057 Success 00:07:05.057 02:11:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:05.057 02:11:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:05.057 02:11:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:05.057 00:07:05.057 real 0m1.789s 00:07:05.057 user 0m1.675s 00:07:05.057 sys 0m0.325s 00:07:05.057 ************************************ 00:07:05.057 END TEST json_config_extra_key 00:07:05.057 ************************************ 00:07:05.057 02:11:06 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.057 02:11:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:05.057 02:11:06 -- 
spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:05.057 02:11:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.057 02:11:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.057 02:11:06 -- common/autotest_common.sh@10 -- # set +x 00:07:05.057 ************************************ 00:07:05.057 START TEST alias_rpc 00:07:05.057 ************************************ 00:07:05.057 02:11:06 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:05.057 * Looking for test storage... 00:07:05.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:05.057 02:11:06 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.057 02:11:06 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.057 02:11:06 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:05.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.317 02:11:06 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.317 --rc genhtml_branch_coverage=1 00:07:05.317 --rc genhtml_function_coverage=1 00:07:05.317 --rc genhtml_legend=1 00:07:05.317 --rc geninfo_all_blocks=1 00:07:05.317 --rc geninfo_unexecuted_blocks=1 00:07:05.317 00:07:05.317 ' 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.317 --rc genhtml_branch_coverage=1 00:07:05.317 --rc genhtml_function_coverage=1 00:07:05.317 --rc genhtml_legend=1 00:07:05.317 --rc geninfo_all_blocks=1 00:07:05.317 --rc geninfo_unexecuted_blocks=1 00:07:05.317 00:07:05.317 ' 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.317 --rc genhtml_branch_coverage=1 00:07:05.317 --rc genhtml_function_coverage=1 00:07:05.317 --rc genhtml_legend=1 00:07:05.317 --rc geninfo_all_blocks=1 00:07:05.317 --rc geninfo_unexecuted_blocks=1 00:07:05.317 00:07:05.317 ' 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.317 --rc genhtml_branch_coverage=1 00:07:05.317 --rc genhtml_function_coverage=1 00:07:05.317 --rc genhtml_legend=1 00:07:05.317 --rc geninfo_all_blocks=1 00:07:05.317 --rc geninfo_unexecuted_blocks=1 00:07:05.317 00:07:05.317 ' 00:07:05.317 02:11:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:05.317 02:11:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70882 00:07:05.317 02:11:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70882 00:07:05.317 02:11:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70882 ']' 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.317 02:11:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.317 [2024-11-08 02:11:07.037491] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:05.317 [2024-11-08 02:11:07.037809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70882 ] 00:07:05.317 [2024-11-08 02:11:07.177403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.576 [2024-11-08 02:11:07.212856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.576 [2024-11-08 02:11:07.248227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.576 02:11:07 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.576 02:11:07 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.576 02:11:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:05.835 02:11:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70882 00:07:05.835 02:11:07 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70882 ']' 00:07:05.835 02:11:07 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70882 00:07:05.835 02:11:07 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:05.835 02:11:07 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.835 02:11:07 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70882 00:07:06.094 02:11:07 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.094 killing process with pid 70882 00:07:06.094 02:11:07 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.094 02:11:07 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70882' 00:07:06.094 02:11:07 alias_rpc -- common/autotest_common.sh@969 -- # kill 70882 00:07:06.094 02:11:07 alias_rpc -- common/autotest_common.sh@974 -- # wait 70882 00:07:06.094 ************************************ 00:07:06.094 END TEST alias_rpc 00:07:06.094 ************************************ 00:07:06.094 00:07:06.094 real 0m1.157s 00:07:06.094 user 0m1.349s 00:07:06.094 sys 0m0.326s 00:07:06.094 02:11:07 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.094 02:11:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.353 02:11:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:06.353 02:11:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:06.353 02:11:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.353 02:11:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.353 02:11:07 -- common/autotest_common.sh@10 -- # set +x 00:07:06.353 ************************************ 00:07:06.353 START TEST spdkcli_tcp 00:07:06.353 ************************************ 00:07:06.353 02:11:07 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:06.353 * Looking for test storage... 
00:07:06.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.353 02:11:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.353 --rc genhtml_branch_coverage=1 00:07:06.353 --rc genhtml_function_coverage=1 00:07:06.353 --rc genhtml_legend=1 00:07:06.353 --rc geninfo_all_blocks=1 00:07:06.353 --rc geninfo_unexecuted_blocks=1 00:07:06.353 00:07:06.353 ' 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.353 --rc genhtml_branch_coverage=1 00:07:06.353 --rc genhtml_function_coverage=1 00:07:06.353 --rc genhtml_legend=1 00:07:06.353 --rc geninfo_all_blocks=1 00:07:06.353 --rc geninfo_unexecuted_blocks=1 00:07:06.353 
00:07:06.353 ' 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.353 --rc genhtml_branch_coverage=1 00:07:06.353 --rc genhtml_function_coverage=1 00:07:06.353 --rc genhtml_legend=1 00:07:06.353 --rc geninfo_all_blocks=1 00:07:06.353 --rc geninfo_unexecuted_blocks=1 00:07:06.353 00:07:06.353 ' 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.353 --rc genhtml_branch_coverage=1 00:07:06.353 --rc genhtml_function_coverage=1 00:07:06.353 --rc genhtml_legend=1 00:07:06.353 --rc geninfo_all_blocks=1 00:07:06.353 --rc geninfo_unexecuted_blocks=1 00:07:06.353 00:07:06.353 ' 00:07:06.353 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:06.353 02:11:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:06.353 02:11:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:06.353 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:06.353 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:06.353 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:06.353 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.353 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70953 00:07:06.353 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:06.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.353 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70953 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70953 ']' 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.353 02:11:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.353 [2024-11-08 02:11:08.232012] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:06.353 [2024-11-08 02:11:08.232135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70953 ] 00:07:06.612 [2024-11-08 02:11:08.365522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.612 [2024-11-08 02:11:08.398471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.612 [2024-11-08 02:11:08.398479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.612 [2024-11-08 02:11:08.433040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.870 02:11:08 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.870 02:11:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:06.870 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70962 00:07:06.870 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:06.870 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:07.130 [ 00:07:07.130 "bdev_malloc_delete", 00:07:07.130 "bdev_malloc_create", 00:07:07.130 "bdev_null_resize", 00:07:07.130 "bdev_null_delete", 00:07:07.130 "bdev_null_create", 00:07:07.130 "bdev_nvme_cuse_unregister", 00:07:07.130 "bdev_nvme_cuse_register", 00:07:07.130 "bdev_opal_new_user", 00:07:07.130 "bdev_opal_set_lock_state", 00:07:07.130 "bdev_opal_delete", 00:07:07.130 "bdev_opal_get_info", 00:07:07.130 "bdev_opal_create", 00:07:07.130 "bdev_nvme_opal_revert", 00:07:07.130 "bdev_nvme_opal_init", 00:07:07.130 "bdev_nvme_send_cmd", 00:07:07.130 "bdev_nvme_set_keys", 00:07:07.130 "bdev_nvme_get_path_iostat", 00:07:07.130 "bdev_nvme_get_mdns_discovery_info", 00:07:07.130 "bdev_nvme_stop_mdns_discovery", 00:07:07.130 "bdev_nvme_start_mdns_discovery", 00:07:07.130 "bdev_nvme_set_multipath_policy", 00:07:07.130 "bdev_nvme_set_preferred_path", 00:07:07.130 "bdev_nvme_get_io_paths", 00:07:07.130 "bdev_nvme_remove_error_injection", 00:07:07.130 "bdev_nvme_add_error_injection", 00:07:07.130 "bdev_nvme_get_discovery_info", 00:07:07.130 "bdev_nvme_stop_discovery", 00:07:07.130 "bdev_nvme_start_discovery", 00:07:07.130 "bdev_nvme_get_controller_health_info", 00:07:07.130 "bdev_nvme_disable_controller", 00:07:07.130 "bdev_nvme_enable_controller", 00:07:07.130 "bdev_nvme_reset_controller", 00:07:07.130 "bdev_nvme_get_transport_statistics", 00:07:07.130 "bdev_nvme_apply_firmware", 00:07:07.130 "bdev_nvme_detach_controller", 00:07:07.130 "bdev_nvme_get_controllers", 00:07:07.130 "bdev_nvme_attach_controller", 00:07:07.130 "bdev_nvme_set_hotplug", 00:07:07.130 "bdev_nvme_set_options", 00:07:07.130 "bdev_passthru_delete", 00:07:07.130 "bdev_passthru_create", 00:07:07.130 "bdev_lvol_set_parent_bdev", 00:07:07.130 "bdev_lvol_set_parent", 00:07:07.130 "bdev_lvol_check_shallow_copy", 00:07:07.130 "bdev_lvol_start_shallow_copy", 00:07:07.130 "bdev_lvol_grow_lvstore", 00:07:07.130 "bdev_lvol_get_lvols", 00:07:07.130 "bdev_lvol_get_lvstores", 00:07:07.130 "bdev_lvol_delete", 00:07:07.130 "bdev_lvol_set_read_only", 00:07:07.130 "bdev_lvol_resize", 00:07:07.130 "bdev_lvol_decouple_parent", 00:07:07.130 "bdev_lvol_inflate", 00:07:07.130 "bdev_lvol_rename", 00:07:07.130 "bdev_lvol_clone_bdev", 00:07:07.130 "bdev_lvol_clone", 00:07:07.130 "bdev_lvol_snapshot", 
00:07:07.130 "bdev_lvol_create", 00:07:07.130 "bdev_lvol_delete_lvstore", 00:07:07.130 "bdev_lvol_rename_lvstore", 00:07:07.130 "bdev_lvol_create_lvstore", 00:07:07.130 "bdev_raid_set_options", 00:07:07.130 "bdev_raid_remove_base_bdev", 00:07:07.130 "bdev_raid_add_base_bdev", 00:07:07.130 "bdev_raid_delete", 00:07:07.130 "bdev_raid_create", 00:07:07.130 "bdev_raid_get_bdevs", 00:07:07.130 "bdev_error_inject_error", 00:07:07.130 "bdev_error_delete", 00:07:07.130 "bdev_error_create", 00:07:07.130 "bdev_split_delete", 00:07:07.130 "bdev_split_create", 00:07:07.130 "bdev_delay_delete", 00:07:07.130 "bdev_delay_create", 00:07:07.130 "bdev_delay_update_latency", 00:07:07.130 "bdev_zone_block_delete", 00:07:07.130 "bdev_zone_block_create", 00:07:07.130 "blobfs_create", 00:07:07.130 "blobfs_detect", 00:07:07.130 "blobfs_set_cache_size", 00:07:07.131 "bdev_aio_delete", 00:07:07.131 "bdev_aio_rescan", 00:07:07.131 "bdev_aio_create", 00:07:07.131 "bdev_ftl_set_property", 00:07:07.131 "bdev_ftl_get_properties", 00:07:07.131 "bdev_ftl_get_stats", 00:07:07.131 "bdev_ftl_unmap", 00:07:07.131 "bdev_ftl_unload", 00:07:07.131 "bdev_ftl_delete", 00:07:07.131 "bdev_ftl_load", 00:07:07.131 "bdev_ftl_create", 00:07:07.131 "bdev_virtio_attach_controller", 00:07:07.131 "bdev_virtio_scsi_get_devices", 00:07:07.131 "bdev_virtio_detach_controller", 00:07:07.131 "bdev_virtio_blk_set_hotplug", 00:07:07.131 "bdev_iscsi_delete", 00:07:07.131 "bdev_iscsi_create", 00:07:07.131 "bdev_iscsi_set_options", 00:07:07.131 "bdev_uring_delete", 00:07:07.131 "bdev_uring_rescan", 00:07:07.131 "bdev_uring_create", 00:07:07.131 "accel_error_inject_error", 00:07:07.131 "ioat_scan_accel_module", 00:07:07.131 "dsa_scan_accel_module", 00:07:07.131 "iaa_scan_accel_module", 00:07:07.131 "vfu_virtio_create_fs_endpoint", 00:07:07.131 "vfu_virtio_create_scsi_endpoint", 00:07:07.131 "vfu_virtio_scsi_remove_target", 00:07:07.131 "vfu_virtio_scsi_add_target", 00:07:07.131 "vfu_virtio_create_blk_endpoint", 00:07:07.131 "vfu_virtio_delete_endpoint", 00:07:07.131 "keyring_file_remove_key", 00:07:07.131 "keyring_file_add_key", 00:07:07.131 "keyring_linux_set_options", 00:07:07.131 "fsdev_aio_delete", 00:07:07.131 "fsdev_aio_create", 00:07:07.131 "iscsi_get_histogram", 00:07:07.131 "iscsi_enable_histogram", 00:07:07.131 "iscsi_set_options", 00:07:07.131 "iscsi_get_auth_groups", 00:07:07.131 "iscsi_auth_group_remove_secret", 00:07:07.131 "iscsi_auth_group_add_secret", 00:07:07.131 "iscsi_delete_auth_group", 00:07:07.131 "iscsi_create_auth_group", 00:07:07.131 "iscsi_set_discovery_auth", 00:07:07.131 "iscsi_get_options", 00:07:07.131 "iscsi_target_node_request_logout", 00:07:07.131 "iscsi_target_node_set_redirect", 00:07:07.131 "iscsi_target_node_set_auth", 00:07:07.131 "iscsi_target_node_add_lun", 00:07:07.131 "iscsi_get_stats", 00:07:07.131 "iscsi_get_connections", 00:07:07.131 "iscsi_portal_group_set_auth", 00:07:07.131 "iscsi_start_portal_group", 00:07:07.131 "iscsi_delete_portal_group", 00:07:07.131 "iscsi_create_portal_group", 00:07:07.131 "iscsi_get_portal_groups", 00:07:07.131 "iscsi_delete_target_node", 00:07:07.131 "iscsi_target_node_remove_pg_ig_maps", 00:07:07.131 "iscsi_target_node_add_pg_ig_maps", 00:07:07.131 "iscsi_create_target_node", 00:07:07.131 "iscsi_get_target_nodes", 00:07:07.131 "iscsi_delete_initiator_group", 00:07:07.131 "iscsi_initiator_group_remove_initiators", 00:07:07.131 "iscsi_initiator_group_add_initiators", 00:07:07.131 "iscsi_create_initiator_group", 00:07:07.131 "iscsi_get_initiator_groups", 00:07:07.131 
"nvmf_set_crdt", 00:07:07.131 "nvmf_set_config", 00:07:07.131 "nvmf_set_max_subsystems", 00:07:07.131 "nvmf_stop_mdns_prr", 00:07:07.131 "nvmf_publish_mdns_prr", 00:07:07.131 "nvmf_subsystem_get_listeners", 00:07:07.131 "nvmf_subsystem_get_qpairs", 00:07:07.131 "nvmf_subsystem_get_controllers", 00:07:07.131 "nvmf_get_stats", 00:07:07.131 "nvmf_get_transports", 00:07:07.131 "nvmf_create_transport", 00:07:07.131 "nvmf_get_targets", 00:07:07.131 "nvmf_delete_target", 00:07:07.131 "nvmf_create_target", 00:07:07.131 "nvmf_subsystem_allow_any_host", 00:07:07.131 "nvmf_subsystem_set_keys", 00:07:07.131 "nvmf_subsystem_remove_host", 00:07:07.131 "nvmf_subsystem_add_host", 00:07:07.131 "nvmf_ns_remove_host", 00:07:07.131 "nvmf_ns_add_host", 00:07:07.131 "nvmf_subsystem_remove_ns", 00:07:07.131 "nvmf_subsystem_set_ns_ana_group", 00:07:07.131 "nvmf_subsystem_add_ns", 00:07:07.131 "nvmf_subsystem_listener_set_ana_state", 00:07:07.131 "nvmf_discovery_get_referrals", 00:07:07.131 "nvmf_discovery_remove_referral", 00:07:07.131 "nvmf_discovery_add_referral", 00:07:07.131 "nvmf_subsystem_remove_listener", 00:07:07.131 "nvmf_subsystem_add_listener", 00:07:07.131 "nvmf_delete_subsystem", 00:07:07.131 "nvmf_create_subsystem", 00:07:07.131 "nvmf_get_subsystems", 00:07:07.131 "env_dpdk_get_mem_stats", 00:07:07.131 "nbd_get_disks", 00:07:07.131 "nbd_stop_disk", 00:07:07.131 "nbd_start_disk", 00:07:07.131 "ublk_recover_disk", 00:07:07.131 "ublk_get_disks", 00:07:07.131 "ublk_stop_disk", 00:07:07.131 "ublk_start_disk", 00:07:07.131 "ublk_destroy_target", 00:07:07.131 "ublk_create_target", 00:07:07.131 "virtio_blk_create_transport", 00:07:07.131 "virtio_blk_get_transports", 00:07:07.131 "vhost_controller_set_coalescing", 00:07:07.131 "vhost_get_controllers", 00:07:07.131 "vhost_delete_controller", 00:07:07.131 "vhost_create_blk_controller", 00:07:07.131 "vhost_scsi_controller_remove_target", 00:07:07.131 "vhost_scsi_controller_add_target", 00:07:07.131 "vhost_start_scsi_controller", 00:07:07.131 "vhost_create_scsi_controller", 00:07:07.131 "thread_set_cpumask", 00:07:07.131 "scheduler_set_options", 00:07:07.131 "framework_get_governor", 00:07:07.131 "framework_get_scheduler", 00:07:07.131 "framework_set_scheduler", 00:07:07.131 "framework_get_reactors", 00:07:07.131 "thread_get_io_channels", 00:07:07.131 "thread_get_pollers", 00:07:07.131 "thread_get_stats", 00:07:07.131 "framework_monitor_context_switch", 00:07:07.131 "spdk_kill_instance", 00:07:07.131 "log_enable_timestamps", 00:07:07.131 "log_get_flags", 00:07:07.131 "log_clear_flag", 00:07:07.131 "log_set_flag", 00:07:07.131 "log_get_level", 00:07:07.131 "log_set_level", 00:07:07.131 "log_get_print_level", 00:07:07.131 "log_set_print_level", 00:07:07.131 "framework_enable_cpumask_locks", 00:07:07.131 "framework_disable_cpumask_locks", 00:07:07.131 "framework_wait_init", 00:07:07.131 "framework_start_init", 00:07:07.131 "scsi_get_devices", 00:07:07.131 "bdev_get_histogram", 00:07:07.131 "bdev_enable_histogram", 00:07:07.131 "bdev_set_qos_limit", 00:07:07.131 "bdev_set_qd_sampling_period", 00:07:07.131 "bdev_get_bdevs", 00:07:07.131 "bdev_reset_iostat", 00:07:07.131 "bdev_get_iostat", 00:07:07.131 "bdev_examine", 00:07:07.131 "bdev_wait_for_examine", 00:07:07.131 "bdev_set_options", 00:07:07.131 "accel_get_stats", 00:07:07.131 "accel_set_options", 00:07:07.131 "accel_set_driver", 00:07:07.131 "accel_crypto_key_destroy", 00:07:07.131 "accel_crypto_keys_get", 00:07:07.131 "accel_crypto_key_create", 00:07:07.131 "accel_assign_opc", 00:07:07.131 
"accel_get_module_info", 00:07:07.131 "accel_get_opc_assignments", 00:07:07.131 "vmd_rescan", 00:07:07.131 "vmd_remove_device", 00:07:07.131 "vmd_enable", 00:07:07.131 "sock_get_default_impl", 00:07:07.131 "sock_set_default_impl", 00:07:07.131 "sock_impl_set_options", 00:07:07.131 "sock_impl_get_options", 00:07:07.131 "iobuf_get_stats", 00:07:07.131 "iobuf_set_options", 00:07:07.131 "keyring_get_keys", 00:07:07.131 "vfu_tgt_set_base_path", 00:07:07.131 "framework_get_pci_devices", 00:07:07.131 "framework_get_config", 00:07:07.131 "framework_get_subsystems", 00:07:07.131 "fsdev_set_opts", 00:07:07.131 "fsdev_get_opts", 00:07:07.131 "trace_get_info", 00:07:07.132 "trace_get_tpoint_group_mask", 00:07:07.132 "trace_disable_tpoint_group", 00:07:07.132 "trace_enable_tpoint_group", 00:07:07.132 "trace_clear_tpoint_mask", 00:07:07.132 "trace_set_tpoint_mask", 00:07:07.132 "notify_get_notifications", 00:07:07.132 "notify_get_types", 00:07:07.132 "spdk_get_version", 00:07:07.132 "rpc_get_methods" 00:07:07.132 ] 00:07:07.132 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.132 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:07.132 02:11:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70953 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70953 ']' 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70953 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70953 00:07:07.132 killing process with pid 70953 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70953' 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70953 00:07:07.132 02:11:08 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70953 00:07:07.391 ************************************ 00:07:07.391 END TEST spdkcli_tcp 00:07:07.391 ************************************ 00:07:07.391 00:07:07.391 real 0m1.096s 00:07:07.391 user 0m1.911s 00:07:07.391 sys 0m0.334s 00:07:07.391 02:11:09 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.391 02:11:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.391 02:11:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:07.391 02:11:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.391 02:11:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.391 02:11:09 -- common/autotest_common.sh@10 -- # set +x 00:07:07.391 ************************************ 00:07:07.391 START TEST dpdk_mem_utility 00:07:07.391 ************************************ 00:07:07.391 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:07.391 * Looking for test storage... 
00:07:07.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:07.391 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:07.392 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:07:07.392 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.651 02:11:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:07.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.651 --rc genhtml_branch_coverage=1 00:07:07.651 --rc genhtml_function_coverage=1 00:07:07.651 --rc genhtml_legend=1 00:07:07.651 --rc geninfo_all_blocks=1 00:07:07.651 --rc geninfo_unexecuted_blocks=1 00:07:07.651 00:07:07.651 ' 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:07.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.651 --rc 
genhtml_branch_coverage=1 00:07:07.651 --rc genhtml_function_coverage=1 00:07:07.651 --rc genhtml_legend=1 00:07:07.651 --rc geninfo_all_blocks=1 00:07:07.651 --rc geninfo_unexecuted_blocks=1 00:07:07.651 00:07:07.651 ' 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:07.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.651 --rc genhtml_branch_coverage=1 00:07:07.651 --rc genhtml_function_coverage=1 00:07:07.651 --rc genhtml_legend=1 00:07:07.651 --rc geninfo_all_blocks=1 00:07:07.651 --rc geninfo_unexecuted_blocks=1 00:07:07.651 00:07:07.651 ' 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:07.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.651 --rc genhtml_branch_coverage=1 00:07:07.651 --rc genhtml_function_coverage=1 00:07:07.651 --rc genhtml_legend=1 00:07:07.651 --rc geninfo_all_blocks=1 00:07:07.651 --rc geninfo_unexecuted_blocks=1 00:07:07.651 00:07:07.651 ' 00:07:07.651 02:11:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:07.651 02:11:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71039 00:07:07.651 02:11:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:07.651 02:11:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71039 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 71039 ']' 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.651 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:07.651 [2024-11-08 02:11:09.358578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
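The dpdk_mem_utility trace that follows exercises a simpler flow: start spdk_tgt, ask it over RPC to dump its DPDK memory statistics (the JSON reply names the dump file, /tmp/spdk_mem_dump.txt in this run), then post-process the dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element detail of heap 0. A hedged sketch of that flow, using the paths from this trace:

    #!/usr/bin/env bash
    # Illustrative sketch of the dpdk_mem_utility flow; paths taken from this run.
    set -euo pipefail
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk.sock

    "$SPDK/build/bin/spdk_tgt" &                 # a single reactor is enough here
    tgt_pid=$!
    until [ -S "$SOCK" ]; do sleep 0.1; done     # wait for the RPC socket

    # Dump DPDK memory stats; the JSON reply contains the dump filename.
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats

    "$SPDK/scripts/dpdk_mem_info.py"             # heaps / mempools / memzones summary
    "$SPDK/scripts/dpdk_mem_info.py" -m 0        # element-level detail for heap id 0

    kill "$tgt_pid"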
00:07:07.651 [2024-11-08 02:11:09.358840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71039 ] 00:07:07.651 [2024-11-08 02:11:09.496014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.651 [2024-11-08 02:11:09.528180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.909 [2024-11-08 02:11:09.564943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.909 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.909 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:07.909 02:11:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:07.909 02:11:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:07.909 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.909 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:07.909 { 00:07:07.909 "filename": "/tmp/spdk_mem_dump.txt" 00:07:07.909 } 00:07:07.909 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.909 02:11:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:07.909 DPDK memory size 860.000000 MiB in 1 heap(s) 00:07:07.909 1 heaps totaling size 860.000000 MiB 00:07:07.909 size: 860.000000 MiB heap id: 0 00:07:07.909 end heaps---------- 00:07:07.909 9 mempools totaling size 642.649841 MiB 00:07:07.909 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:07.909 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:07.909 size: 92.545471 MiB name: bdev_io_71039 00:07:07.909 size: 51.011292 MiB name: evtpool_71039 00:07:07.909 size: 50.003479 MiB name: msgpool_71039 00:07:07.909 size: 36.509338 MiB name: fsdev_io_71039 00:07:07.909 size: 21.763794 MiB name: PDU_Pool 00:07:07.909 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:07.909 size: 0.026123 MiB name: Session_Pool 00:07:07.909 end mempools------- 00:07:07.909 6 memzones totaling size 4.142822 MiB 00:07:07.909 size: 1.000366 MiB name: RG_ring_0_71039 00:07:07.909 size: 1.000366 MiB name: RG_ring_1_71039 00:07:07.909 size: 1.000366 MiB name: RG_ring_4_71039 00:07:07.909 size: 1.000366 MiB name: RG_ring_5_71039 00:07:07.909 size: 0.125366 MiB name: RG_ring_2_71039 00:07:07.909 size: 0.015991 MiB name: RG_ring_3_71039 00:07:07.909 end memzones------- 00:07:07.909 02:11:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:08.169 heap id: 0 total size: 860.000000 MiB number of busy elements: 319 number of free elements: 16 00:07:08.169 list of free elements. 
size: 13.934326 MiB 00:07:08.169 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:08.169 element at address: 0x200000800000 with size: 1.996948 MiB 00:07:08.169 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:07:08.169 element at address: 0x20001be00000 with size: 0.999878 MiB 00:07:08.169 element at address: 0x200034a00000 with size: 0.994446 MiB 00:07:08.169 element at address: 0x200009600000 with size: 0.959839 MiB 00:07:08.169 element at address: 0x200015e00000 with size: 0.954285 MiB 00:07:08.169 element at address: 0x20001c000000 with size: 0.936584 MiB 00:07:08.169 element at address: 0x200000200000 with size: 0.834839 MiB 00:07:08.169 element at address: 0x20001d800000 with size: 0.564758 MiB 00:07:08.169 element at address: 0x200003e00000 with size: 0.489563 MiB 00:07:08.169 element at address: 0x20000d800000 with size: 0.489441 MiB 00:07:08.169 element at address: 0x20001c200000 with size: 0.485657 MiB 00:07:08.169 element at address: 0x200007000000 with size: 0.480469 MiB 00:07:08.169 element at address: 0x20002ac00000 with size: 0.396118 MiB 00:07:08.169 element at address: 0x200003a00000 with size: 0.352112 MiB 00:07:08.169 list of standard malloc elements. size: 199.268982 MiB 00:07:08.169 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:07:08.169 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:07:08.169 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:07:08.169 element at address: 0x20001befff80 with size: 1.000122 MiB 00:07:08.169 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:07:08.169 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:08.169 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:07:08.169 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:08.169 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:07:08.169 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:07:08.169 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:08.169 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a5a240 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a5e700 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003aff880 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:07:08.170 element at 
address: 0x200003e7d900 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000707b000 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000707b180 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000707b240 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000707b300 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000707b480 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000707b540 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000707b600 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:07:08.170 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000d87da00 
with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d890940 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d890a00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d890ac0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d890b80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d890c40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d890d00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d890dc0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d890e80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d890f40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891000 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d8910c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891180 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891240 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891300 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891480 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891540 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891600 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891780 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891840 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891900 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892080 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892140 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892200 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892380 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892440 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892500 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892680 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892740 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892800 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d8928c0 with size: 0.000183 MiB 
00:07:08.170 element at address: 0x20001d892980 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d893040 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d893100 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d893280 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d893340 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d893400 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d893580 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d893640 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d893700 with size: 0.000183 MiB 00:07:08.170 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d893880 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d893940 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894000 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894180 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894240 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894300 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894480 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894540 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894600 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894780 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894840 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894900 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:07:08.171 element at 
address: 0x20001d894e40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d895080 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d895140 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d895200 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d895380 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20001d895440 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac65680 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac65740 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6c340 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e100 
with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:07:08.171 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:07:08.171 list of memzone associated elements. 
size: 646.796692 MiB 00:07:08.171 element at address: 0x20001d895500 with size: 211.416748 MiB 00:07:08.171 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:08.171 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:07:08.171 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:08.171 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:07:08.171 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71039_0 00:07:08.171 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:08.171 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71039_0 00:07:08.171 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:08.171 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71039_0 00:07:08.171 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:07:08.171 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71039_0 00:07:08.171 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:07:08.171 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:08.171 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:07:08.171 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:08.171 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:08.171 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71039 00:07:08.171 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:08.171 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71039 00:07:08.171 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:08.171 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71039 00:07:08.171 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:07:08.171 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:08.171 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:07:08.172 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:08.172 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:07:08.172 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:08.172 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:07:08.172 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:08.172 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:08.172 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71039 00:07:08.172 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:08.172 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71039 00:07:08.172 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:07:08.172 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71039 00:07:08.172 element at address: 0x200034afe940 with size: 1.000488 MiB 00:07:08.172 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71039 00:07:08.172 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:07:08.172 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71039 00:07:08.172 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:07:08.172 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71039 00:07:08.172 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:07:08.172 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:08.172 element at address: 0x20000707b780 with size: 0.500488 MiB 00:07:08.172 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:07:08.172 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:07:08.172 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:08.172 element at address: 0x200003a5e7c0 with size: 0.125488 MiB 00:07:08.172 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71039 00:07:08.172 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:07:08.172 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:08.172 element at address: 0x20002ac65800 with size: 0.023743 MiB 00:07:08.172 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:08.172 element at address: 0x200003a5a500 with size: 0.016113 MiB 00:07:08.172 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71039 00:07:08.172 element at address: 0x20002ac6b940 with size: 0.002441 MiB 00:07:08.172 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:08.172 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:07:08.172 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71039 00:07:08.172 element at address: 0x200003aff940 with size: 0.000305 MiB 00:07:08.172 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71039 00:07:08.172 element at address: 0x200003a5a300 with size: 0.000305 MiB 00:07:08.172 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71039 00:07:08.172 element at address: 0x20002ac6c400 with size: 0.000305 MiB 00:07:08.172 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:08.172 02:11:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:08.172 02:11:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71039 00:07:08.172 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 71039 ']' 00:07:08.172 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 71039 00:07:08.172 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:08.172 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.172 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71039 00:07:08.172 killing process with pid 71039 00:07:08.172 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.172 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.172 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71039' 00:07:08.172 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 71039 00:07:08.172 02:11:09 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 71039 00:07:08.431 ************************************ 00:07:08.431 END TEST dpdk_mem_utility 00:07:08.431 ************************************ 00:07:08.431 00:07:08.431 real 0m0.927s 00:07:08.431 user 0m0.983s 00:07:08.431 sys 0m0.296s 00:07:08.431 02:11:10 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.431 02:11:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:08.431 02:11:10 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:08.431 02:11:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.431 02:11:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.431 02:11:10 -- common/autotest_common.sh@10 -- # set +x 
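Before each suite in this run (spdkcli_tcp and dpdk_mem_utility above, the event tests below), autotest_common.sh runs the same coverage gate: it takes the installed lcov version from `lcov --version | awk '{print $NF}'`, splits it on '.' and '-', compares it field by field against 2 (here 1.15 vs 2), and, since this lcov predates 2.0, exports LCOV_OPTS/LCOV with the 1.x-style --rc lcov_branch_coverage / lcov_function_coverage flags. A condensed sketch of that comparison, assuming numeric version components (the real scripts/common.sh handles more cases):

    # Condensed sketch of the version gate traced in this log (simplified from scripts/common.sh).
    installed="$(lcov --version | awk '{print $NF}')"   # "1.15" in this run

    lt() {   # succeeds when version $1 sorts before version $2
        local -a ver1 ver2
        local v len
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # versions are equal
    }

    if lt "$installed" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi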
00:07:08.431 ************************************ 00:07:08.431 START TEST event 00:07:08.431 ************************************ 00:07:08.431 02:11:10 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:08.431 * Looking for test storage... 00:07:08.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:08.431 02:11:10 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.431 02:11:10 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.431 02:11:10 event -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.431 02:11:10 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.431 02:11:10 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.431 02:11:10 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.431 02:11:10 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.431 02:11:10 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.431 02:11:10 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.431 02:11:10 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.431 02:11:10 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.431 02:11:10 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.431 02:11:10 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.431 02:11:10 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.431 02:11:10 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.431 02:11:10 event -- scripts/common.sh@344 -- # case "$op" in 00:07:08.431 02:11:10 event -- scripts/common.sh@345 -- # : 1 00:07:08.431 02:11:10 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.431 02:11:10 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.431 02:11:10 event -- scripts/common.sh@365 -- # decimal 1 00:07:08.431 02:11:10 event -- scripts/common.sh@353 -- # local d=1 00:07:08.431 02:11:10 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.431 02:11:10 event -- scripts/common.sh@355 -- # echo 1 00:07:08.431 02:11:10 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.431 02:11:10 event -- scripts/common.sh@366 -- # decimal 2 00:07:08.431 02:11:10 event -- scripts/common.sh@353 -- # local d=2 00:07:08.431 02:11:10 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.431 02:11:10 event -- scripts/common.sh@355 -- # echo 2 00:07:08.431 02:11:10 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.431 02:11:10 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.431 02:11:10 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.431 02:11:10 event -- scripts/common.sh@368 -- # return 0 00:07:08.431 02:11:10 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.431 02:11:10 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.431 --rc genhtml_branch_coverage=1 00:07:08.431 --rc genhtml_function_coverage=1 00:07:08.431 --rc genhtml_legend=1 00:07:08.431 --rc geninfo_all_blocks=1 00:07:08.431 --rc geninfo_unexecuted_blocks=1 00:07:08.431 00:07:08.431 ' 00:07:08.431 02:11:10 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.431 --rc genhtml_branch_coverage=1 00:07:08.431 --rc genhtml_function_coverage=1 00:07:08.431 --rc genhtml_legend=1 00:07:08.431 --rc 
geninfo_all_blocks=1 00:07:08.432 --rc geninfo_unexecuted_blocks=1 00:07:08.432 00:07:08.432 ' 00:07:08.432 02:11:10 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.432 --rc genhtml_branch_coverage=1 00:07:08.432 --rc genhtml_function_coverage=1 00:07:08.432 --rc genhtml_legend=1 00:07:08.432 --rc geninfo_all_blocks=1 00:07:08.432 --rc geninfo_unexecuted_blocks=1 00:07:08.432 00:07:08.432 ' 00:07:08.432 02:11:10 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.432 --rc genhtml_branch_coverage=1 00:07:08.432 --rc genhtml_function_coverage=1 00:07:08.432 --rc genhtml_legend=1 00:07:08.432 --rc geninfo_all_blocks=1 00:07:08.432 --rc geninfo_unexecuted_blocks=1 00:07:08.432 00:07:08.432 ' 00:07:08.432 02:11:10 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:08.432 02:11:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:08.432 02:11:10 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:08.432 02:11:10 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:08.432 02:11:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.432 02:11:10 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.432 ************************************ 00:07:08.432 START TEST event_perf 00:07:08.432 ************************************ 00:07:08.432 02:11:10 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:08.432 Running I/O for 1 seconds...[2024-11-08 02:11:10.311704] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:08.432 [2024-11-08 02:11:10.312008] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71111 ] 00:07:08.691 [2024-11-08 02:11:10.446132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.691 [2024-11-08 02:11:10.480714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.691 [2024-11-08 02:11:10.480814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.691 [2024-11-08 02:11:10.480949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.691 [2024-11-08 02:11:10.480952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.066 Running I/O for 1 seconds... 00:07:10.066 lcore 0: 198445 00:07:10.066 lcore 1: 198445 00:07:10.066 lcore 2: 198446 00:07:10.066 lcore 3: 198447 00:07:10.066 done. 
00:07:10.066 ************************************ 00:07:10.066 END TEST event_perf 00:07:10.066 ************************************ 00:07:10.066 00:07:10.066 real 0m1.238s 00:07:10.066 user 0m4.068s 00:07:10.066 sys 0m0.051s 00:07:10.066 02:11:11 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.066 02:11:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.066 02:11:11 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:10.066 02:11:11 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:10.066 02:11:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.066 02:11:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.066 ************************************ 00:07:10.066 START TEST event_reactor 00:07:10.066 ************************************ 00:07:10.066 02:11:11 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:10.066 [2024-11-08 02:11:11.591553] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:10.066 [2024-11-08 02:11:11.591806] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71149 ] 00:07:10.066 [2024-11-08 02:11:11.722837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.066 [2024-11-08 02:11:11.754820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.002 test_start 00:07:11.002 oneshot 00:07:11.002 tick 100 00:07:11.002 tick 100 00:07:11.002 tick 250 00:07:11.002 tick 100 00:07:11.002 tick 100 00:07:11.002 tick 100 00:07:11.002 tick 250 00:07:11.002 tick 500 00:07:11.002 tick 100 00:07:11.002 tick 100 00:07:11.002 tick 250 00:07:11.002 tick 100 00:07:11.002 tick 100 00:07:11.002 test_end 00:07:11.002 00:07:11.002 real 0m1.225s 00:07:11.002 user 0m1.085s 00:07:11.002 sys 0m0.035s 00:07:11.002 02:11:12 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.002 ************************************ 00:07:11.002 END TEST event_reactor 00:07:11.002 ************************************ 00:07:11.002 02:11:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 02:11:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:11.002 02:11:12 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:11.002 02:11:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.002 02:11:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.002 ************************************ 00:07:11.002 START TEST event_reactor_perf 00:07:11.002 ************************************ 00:07:11.002 02:11:12 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:11.002 [2024-11-08 02:11:12.868632] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:11.002 [2024-11-08 02:11:12.868882] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71185 ] 00:07:11.264 [2024-11-08 02:11:13.001755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.264 [2024-11-08 02:11:13.032625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.205 test_start 00:07:12.205 test_end 00:07:12.205 Performance: 456952 events per second 00:07:12.205 00:07:12.205 real 0m1.228s 00:07:12.205 user 0m1.088s 00:07:12.205 sys 0m0.036s 00:07:12.205 ************************************ 00:07:12.205 END TEST event_reactor_perf 00:07:12.205 ************************************ 00:07:12.205 02:11:14 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.205 02:11:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.464 02:11:14 event -- event/event.sh@49 -- # uname -s 00:07:12.464 02:11:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:12.464 02:11:14 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:12.464 02:11:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.464 02:11:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.464 02:11:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:12.464 ************************************ 00:07:12.464 START TEST event_scheduler 00:07:12.464 ************************************ 00:07:12.464 02:11:14 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:12.464 * Looking for test storage... 
00:07:12.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:12.464 02:11:14 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:12.464 02:11:14 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:07:12.464 02:11:14 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:12.723 02:11:14 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.723 02:11:14 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:12.723 02:11:14 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.723 02:11:14 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:12.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.723 --rc genhtml_branch_coverage=1 00:07:12.723 --rc genhtml_function_coverage=1 00:07:12.723 --rc genhtml_legend=1 00:07:12.723 --rc geninfo_all_blocks=1 00:07:12.723 --rc geninfo_unexecuted_blocks=1 00:07:12.723 00:07:12.723 ' 00:07:12.723 02:11:14 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:12.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.723 --rc genhtml_branch_coverage=1 00:07:12.723 --rc genhtml_function_coverage=1 00:07:12.723 --rc genhtml_legend=1 00:07:12.723 --rc geninfo_all_blocks=1 00:07:12.723 --rc geninfo_unexecuted_blocks=1 00:07:12.723 00:07:12.723 ' 00:07:12.723 02:11:14 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:12.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.723 --rc genhtml_branch_coverage=1 00:07:12.723 --rc genhtml_function_coverage=1 00:07:12.723 --rc genhtml_legend=1 00:07:12.723 --rc geninfo_all_blocks=1 00:07:12.723 --rc geninfo_unexecuted_blocks=1 00:07:12.723 00:07:12.723 ' 00:07:12.723 02:11:14 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:12.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.723 --rc genhtml_branch_coverage=1 00:07:12.723 --rc genhtml_function_coverage=1 00:07:12.723 --rc genhtml_legend=1 00:07:12.723 --rc geninfo_all_blocks=1 00:07:12.723 --rc geninfo_unexecuted_blocks=1 00:07:12.723 00:07:12.723 ' 00:07:12.723 02:11:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:12.723 02:11:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71249 00:07:12.723 02:11:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:12.723 02:11:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:12.723 02:11:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71249 00:07:12.723 02:11:14 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71249 ']' 00:07:12.723 02:11:14 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.723 02:11:14 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.723 02:11:14 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.723 02:11:14 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.723 02:11:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.723 [2024-11-08 02:11:14.430308] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:12.723 [2024-11-08 02:11:14.430414] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71249 ] 00:07:12.723 [2024-11-08 02:11:14.572012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.983 [2024-11-08 02:11:14.616412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.983 [2024-11-08 02:11:14.616457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.983 [2024-11-08 02:11:14.616574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.983 [2024-11-08 02:11:14.616582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.983 02:11:14 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.983 02:11:14 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:12.983 02:11:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:12.983 02:11:14 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.983 02:11:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.983 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.983 POWER: Cannot set governor of lcore 0 to userspace 00:07:12.983 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.983 POWER: Cannot set governor of lcore 0 to performance 00:07:12.983 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.983 POWER: Cannot set governor of lcore 0 to userspace 00:07:12.983 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:12.983 POWER: Cannot set governor of lcore 0 to userspace 00:07:12.983 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:12.983 POWER: Unable to set Power Management Environment for lcore 0 00:07:12.983 [2024-11-08 02:11:14.690572] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:12.983 [2024-11-08 02:11:14.690587] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:12.983 [2024-11-08 02:11:14.690603] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:12.983 [2024-11-08 02:11:14.690618] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:12.983 [2024-11-08 
02:11:14.690627] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:12.983 [2024-11-08 02:11:14.690636] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:12.983 02:11:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.983 02:11:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:12.983 02:11:14 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.983 02:11:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.983 [2024-11-08 02:11:14.729967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.983 [2024-11-08 02:11:14.748041] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:12.983 02:11:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.983 02:11:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:12.983 02:11:14 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.983 02:11:14 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.983 02:11:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.983 ************************************ 00:07:12.983 START TEST scheduler_create_thread 00:07:12.983 ************************************ 00:07:12.983 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:12.983 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:12.983 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.983 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.983 2 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.984 3 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.984 4 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:12.984 02:11:14 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.984 5 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.984 6 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.984 7 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.984 8 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.984 9 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.984 10 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.984 02:11:14 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.984 02:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.552 02:11:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.552 02:11:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:13.552 02:11:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.552 02:11:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:14.929 02:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.929 02:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:14.929 02:11:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:14.929 02:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.929 02:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.305 ************************************ 00:07:16.305 END TEST scheduler_create_thread 00:07:16.305 ************************************ 00:07:16.305 02:11:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.305 00:07:16.305 real 0m3.091s 00:07:16.305 user 0m0.020s 00:07:16.305 sys 0m0.006s 00:07:16.305 02:11:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.305 02:11:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:16.305 02:11:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:16.305 02:11:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71249 00:07:16.305 02:11:17 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71249 ']' 00:07:16.305 02:11:17 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71249 00:07:16.305 02:11:17 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:16.305 02:11:17 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.305 02:11:17 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71249 00:07:16.305 killing process with pid 71249 00:07:16.305 02:11:17 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:16.305 02:11:17 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:16.305 02:11:17 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71249' 00:07:16.305 02:11:17 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71249 00:07:16.305 
02:11:17 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71249 00:07:16.564 [2024-11-08 02:11:18.231564] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:16.564 00:07:16.564 real 0m4.260s 00:07:16.564 user 0m6.664s 00:07:16.564 sys 0m0.316s 00:07:16.564 ************************************ 00:07:16.564 END TEST event_scheduler 00:07:16.564 ************************************ 00:07:16.564 02:11:18 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.564 02:11:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:16.564 02:11:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:16.564 02:11:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:16.564 02:11:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.564 02:11:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.564 02:11:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.823 ************************************ 00:07:16.823 START TEST app_repeat 00:07:16.823 ************************************ 00:07:16.823 02:11:18 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:16.823 Process app_repeat pid: 71341 00:07:16.823 spdk_app_start Round 0 00:07:16.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71341 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71341' 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:16.823 02:11:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71341 /var/tmp/spdk-nbd.sock 00:07:16.823 02:11:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71341 ']' 00:07:16.823 02:11:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.823 02:11:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.823 02:11:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:16.823 02:11:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.823 02:11:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.823 [2024-11-08 02:11:18.479381] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:16.823 [2024-11-08 02:11:18.479670] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71341 ] 00:07:16.823 [2024-11-08 02:11:18.606447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.823 [2024-11-08 02:11:18.639820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.823 [2024-11-08 02:11:18.639827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.823 [2024-11-08 02:11:18.667257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.086 02:11:18 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.086 02:11:18 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:17.086 02:11:18 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.357 Malloc0 00:07:17.357 02:11:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.652 Malloc1 00:07:17.652 02:11:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.652 02:11:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.918 /dev/nbd0 00:07:17.918 02:11:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.918 02:11:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.918 1+0 records in 00:07:17.918 1+0 records out 00:07:17.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026252 s, 15.6 MB/s 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.918 02:11:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:17.918 02:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.918 02:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.918 02:11:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:18.181 /dev/nbd1 00:07:18.181 02:11:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.181 02:11:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:18.181 1+0 records in 00:07:18.181 1+0 records out 00:07:18.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278996 s, 14.7 MB/s 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:18.181 02:11:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.181 02:11:19 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:07:18.181 02:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.181 02:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.181 02:11:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.181 02:11:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.181 02:11:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.439 02:11:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.439 { 00:07:18.439 "nbd_device": "/dev/nbd0", 00:07:18.440 "bdev_name": "Malloc0" 00:07:18.440 }, 00:07:18.440 { 00:07:18.440 "nbd_device": "/dev/nbd1", 00:07:18.440 "bdev_name": "Malloc1" 00:07:18.440 } 00:07:18.440 ]' 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.440 { 00:07:18.440 "nbd_device": "/dev/nbd0", 00:07:18.440 "bdev_name": "Malloc0" 00:07:18.440 }, 00:07:18.440 { 00:07:18.440 "nbd_device": "/dev/nbd1", 00:07:18.440 "bdev_name": "Malloc1" 00:07:18.440 } 00:07:18.440 ]' 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.440 /dev/nbd1' 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.440 /dev/nbd1' 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.440 256+0 records in 00:07:18.440 256+0 records out 00:07:18.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00888498 s, 118 MB/s 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.440 256+0 records in 00:07:18.440 256+0 records out 00:07:18.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210289 s, 49.9 MB/s 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.440 256+0 records in 00:07:18.440 
256+0 records out 00:07:18.440 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239505 s, 43.8 MB/s 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.440 02:11:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.699 02:11:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.699 02:11:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.699 02:11:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.699 02:11:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.699 02:11:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.699 02:11:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.699 02:11:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.699 02:11:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.699 02:11:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.699 02:11:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.957 02:11:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.957 02:11:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.957 02:11:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.957 02:11:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.957 02:11:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:07:18.957 02:11:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.957 02:11:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.957 02:11:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.957 02:11:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.957 02:11:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.957 02:11:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.216 02:11:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.216 02:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.216 02:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.474 02:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.474 02:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.474 02:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.474 02:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:19.474 02:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.474 02:11:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.474 02:11:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:19.474 02:11:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:19.474 02:11:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:19.474 02:11:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.732 02:11:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.732 [2024-11-08 02:11:21.554666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.732 [2024-11-08 02:11:21.588869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.732 [2024-11-08 02:11:21.588879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.992 [2024-11-08 02:11:21.616623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.992 [2024-11-08 02:11:21.616730] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:19.992 [2024-11-08 02:11:21.616744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:23.279 spdk_app_start Round 1 00:07:23.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:23.279 02:11:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:23.279 02:11:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:23.279 02:11:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71341 /var/tmp/spdk-nbd.sock 00:07:23.279 02:11:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71341 ']' 00:07:23.279 02:11:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:23.279 02:11:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.279 02:11:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:23.279 02:11:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.279 02:11:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:23.279 02:11:24 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.279 02:11:24 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:23.279 02:11:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.279 Malloc0 00:07:23.279 02:11:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.538 Malloc1 00:07:23.538 02:11:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.538 02:11:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:23.796 /dev/nbd0 00:07:23.796 02:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.796 02:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:23.796 1+0 records in 00:07:23.796 1+0 records out 
00:07:23.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305265 s, 13.4 MB/s 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:23.796 02:11:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:23.796 02:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.796 02:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.796 02:11:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:24.055 /dev/nbd1 00:07:24.055 02:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:24.055 02:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.055 1+0 records in 00:07:24.055 1+0 records out 00:07:24.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289827 s, 14.1 MB/s 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:24.055 02:11:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:24.055 02:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.055 02:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.055 02:11:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.055 02:11:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.055 02:11:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:24.314 { 00:07:24.314 "nbd_device": "/dev/nbd0", 00:07:24.314 "bdev_name": "Malloc0" 00:07:24.314 }, 00:07:24.314 { 00:07:24.314 "nbd_device": "/dev/nbd1", 00:07:24.314 "bdev_name": "Malloc1" 00:07:24.314 } 
00:07:24.314 ]' 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.314 { 00:07:24.314 "nbd_device": "/dev/nbd0", 00:07:24.314 "bdev_name": "Malloc0" 00:07:24.314 }, 00:07:24.314 { 00:07:24.314 "nbd_device": "/dev/nbd1", 00:07:24.314 "bdev_name": "Malloc1" 00:07:24.314 } 00:07:24.314 ]' 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:24.314 /dev/nbd1' 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:24.314 /dev/nbd1' 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:24.314 256+0 records in 00:07:24.314 256+0 records out 00:07:24.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00855505 s, 123 MB/s 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.314 02:11:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:24.573 256+0 records in 00:07:24.573 256+0 records out 00:07:24.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231697 s, 45.3 MB/s 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:24.573 256+0 records in 00:07:24.573 256+0 records out 00:07:24.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024641 s, 42.6 MB/s 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:24.573 02:11:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.574 02:11:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.574 02:11:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.574 02:11:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:24.574 02:11:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.574 02:11:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:24.833 02:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:24.833 02:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:24.833 02:11:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:24.833 02:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:24.833 02:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:24.833 02:11:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:24.833 02:11:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:24.833 02:11:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:24.833 02:11:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.833 02:11:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:25.092 02:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:25.092 02:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:25.092 02:11:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:25.092 02:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.092 02:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.092 02:11:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:25.092 02:11:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.092 02:11:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.092 02:11:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.092 02:11:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.092 02:11:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:25.350 02:11:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:25.350 02:11:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:25.609 02:11:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:25.868 [2024-11-08 02:11:27.545785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.868 [2024-11-08 02:11:27.576722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.868 [2024-11-08 02:11:27.576733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.868 [2024-11-08 02:11:27.604137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.868 [2024-11-08 02:11:27.604253] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:25.868 [2024-11-08 02:11:27.604266] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:29.154 02:11:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:29.154 spdk_app_start Round 2 00:07:29.154 02:11:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:29.154 02:11:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71341 /var/tmp/spdk-nbd.sock 00:07:29.154 02:11:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71341 ']' 00:07:29.154 02:11:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:29.154 02:11:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:29.154 02:11:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
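The round traced above exercises the write/verify helper from bdev/nbd_common.sh: fill a temp file with random data, copy it onto every exported NBD device, then byte-compare each device against the file. A rough sketch of that pattern, with the paths and sizes taken from the trace and error handling omitted (the temp-file location here is a stand-in, not the repo path used by the test):

  tmp_file=/tmp/nbdrandtest                                      # stand-in for the repo-local temp file
  nbd_list=(/dev/nbd0 /dev/nbd1)
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of random data
  for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write it to each exported device
  done
  for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"                              # byte-compare the first 1 MiB back
  done
  rm "$tmp_file"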
00:07:29.154 02:11:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.154 02:11:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:29.154 02:11:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.154 02:11:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:29.154 02:11:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:29.154 Malloc0 00:07:29.154 02:11:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:29.413 Malloc1 00:07:29.413 02:11:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.413 02:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:29.671 /dev/nbd0 00:07:29.671 02:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:29.671 02:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:29.671 02:11:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:29.672 1+0 records in 00:07:29.672 1+0 records out 
00:07:29.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188818 s, 21.7 MB/s 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.672 02:11:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:29.672 02:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.672 02:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.672 02:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:29.929 /dev/nbd1 00:07:29.929 02:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:29.929 02:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:29.929 1+0 records in 00:07:29.929 1+0 records out 00:07:29.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333937 s, 12.3 MB/s 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.929 02:11:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:29.929 02:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.929 02:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:29.929 02:11:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:29.929 02:11:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.929 02:11:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:30.494 { 00:07:30.494 "nbd_device": "/dev/nbd0", 00:07:30.494 "bdev_name": "Malloc0" 00:07:30.494 }, 00:07:30.494 { 00:07:30.494 "nbd_device": "/dev/nbd1", 00:07:30.494 "bdev_name": "Malloc1" 00:07:30.494 } 
00:07:30.494 ]' 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:30.494 { 00:07:30.494 "nbd_device": "/dev/nbd0", 00:07:30.494 "bdev_name": "Malloc0" 00:07:30.494 }, 00:07:30.494 { 00:07:30.494 "nbd_device": "/dev/nbd1", 00:07:30.494 "bdev_name": "Malloc1" 00:07:30.494 } 00:07:30.494 ]' 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:30.494 /dev/nbd1' 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:30.494 /dev/nbd1' 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:30.494 256+0 records in 00:07:30.494 256+0 records out 00:07:30.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0077727 s, 135 MB/s 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.494 02:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:30.494 256+0 records in 00:07:30.494 256+0 records out 00:07:30.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294308 s, 35.6 MB/s 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:30.495 256+0 records in 00:07:30.495 256+0 records out 00:07:30.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231882 s, 45.2 MB/s 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.495 02:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:30.753 02:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:30.753 02:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:30.753 02:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:30.753 02:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.753 02:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.753 02:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:30.753 02:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:30.753 02:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.753 02:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.753 02:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:31.011 02:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:31.011 02:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:31.011 02:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:31.011 02:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.011 02:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.011 02:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:31.011 02:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:31.011 02:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.011 02:11:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.011 02:11:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.011 02:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:31.271 02:11:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:31.271 02:11:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:31.530 02:11:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:31.788 [2024-11-08 02:11:33.494640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:31.788 [2024-11-08 02:11:33.525794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.788 [2024-11-08 02:11:33.525804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.788 [2024-11-08 02:11:33.552667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.788 [2024-11-08 02:11:33.552754] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:31.788 [2024-11-08 02:11:33.552766] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:35.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:35.073 02:11:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71341 /var/tmp/spdk-nbd.sock 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71341 ']' 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
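Each round above is set up the same way before the verify step: two malloc bdevs are created over the target's RPC socket, each is exported as an NBD device, and the helper polls /proc/partitions until the kernel has registered it. A minimal sketch of that setup for one device, using the socket, sizes and retry count shown in the trace (the sleep interval is illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  "$rpc" -s "$sock" bdev_malloc_create 64 4096         # 64 MiB malloc bdev, 4 KiB blocks; prints its name, e.g. Malloc0
  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0   # export the bdev as an NBD block device
  for i in $(seq 1 20); do                             # poll until the kernel registers the device
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1                                          # retry interval assumed, not taken from the log
  done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # confirm one block reads back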
00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:35.073 02:11:36 event.app_repeat -- event/event.sh@39 -- # killprocess 71341 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71341 ']' 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71341 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71341 00:07:35.073 killing process with pid 71341 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71341' 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71341 00:07:35.073 02:11:36 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71341 00:07:35.073 spdk_app_start is called in Round 0. 00:07:35.073 Shutdown signal received, stop current app iteration 00:07:35.073 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:35.073 spdk_app_start is called in Round 1. 00:07:35.073 Shutdown signal received, stop current app iteration 00:07:35.073 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:35.073 spdk_app_start is called in Round 2. 00:07:35.073 Shutdown signal received, stop current app iteration 00:07:35.073 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:35.073 spdk_app_start is called in Round 3. 00:07:35.073 Shutdown signal received, stop current app iteration 00:07:35.074 ************************************ 00:07:35.074 END TEST app_repeat 00:07:35.074 ************************************ 00:07:35.074 02:11:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:35.074 02:11:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:35.074 00:07:35.074 real 0m18.390s 00:07:35.074 user 0m42.298s 00:07:35.074 sys 0m2.401s 00:07:35.074 02:11:36 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.074 02:11:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:35.074 02:11:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:35.074 02:11:36 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:35.074 02:11:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.074 02:11:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.074 02:11:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:35.074 ************************************ 00:07:35.074 START TEST cpu_locks 00:07:35.074 ************************************ 00:07:35.074 02:11:36 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:35.333 * Looking for test storage... 
00:07:35.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:35.333 02:11:36 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:35.333 02:11:36 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:35.333 02:11:36 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:35.333 02:11:37 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.333 02:11:37 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:35.333 02:11:37 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.333 02:11:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:35.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.333 --rc genhtml_branch_coverage=1 00:07:35.333 --rc genhtml_function_coverage=1 00:07:35.333 --rc genhtml_legend=1 00:07:35.333 --rc geninfo_all_blocks=1 00:07:35.333 --rc geninfo_unexecuted_blocks=1 00:07:35.333 00:07:35.333 ' 00:07:35.333 02:11:37 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:35.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.333 --rc genhtml_branch_coverage=1 00:07:35.333 --rc genhtml_function_coverage=1 
00:07:35.333 --rc genhtml_legend=1 00:07:35.333 --rc geninfo_all_blocks=1 00:07:35.333 --rc geninfo_unexecuted_blocks=1 00:07:35.333 00:07:35.333 ' 00:07:35.333 02:11:37 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:35.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.333 --rc genhtml_branch_coverage=1 00:07:35.333 --rc genhtml_function_coverage=1 00:07:35.333 --rc genhtml_legend=1 00:07:35.333 --rc geninfo_all_blocks=1 00:07:35.333 --rc geninfo_unexecuted_blocks=1 00:07:35.333 00:07:35.333 ' 00:07:35.333 02:11:37 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:35.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.333 --rc genhtml_branch_coverage=1 00:07:35.333 --rc genhtml_function_coverage=1 00:07:35.333 --rc genhtml_legend=1 00:07:35.333 --rc geninfo_all_blocks=1 00:07:35.334 --rc geninfo_unexecuted_blocks=1 00:07:35.334 00:07:35.334 ' 00:07:35.334 02:11:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:35.334 02:11:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:35.334 02:11:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:35.334 02:11:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:35.334 02:11:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.334 02:11:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.334 02:11:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.334 ************************************ 00:07:35.334 START TEST default_locks 00:07:35.334 ************************************ 00:07:35.334 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:35.334 02:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71778 00:07:35.334 02:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71778 00:07:35.334 02:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.334 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71778 ']' 00:07:35.334 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.334 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.334 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.334 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.334 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.334 [2024-11-08 02:11:37.168569] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:35.334 [2024-11-08 02:11:37.168690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71778 ] 00:07:35.592 [2024-11-08 02:11:37.308553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.592 [2024-11-08 02:11:37.343692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.592 [2024-11-08 02:11:37.380059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.851 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.851 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:35.851 02:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71778 00:07:35.851 02:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71778 00:07:35.851 02:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:36.110 02:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71778 00:07:36.110 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71778 ']' 00:07:36.110 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71778 00:07:36.110 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:36.110 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.110 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71778 00:07:36.110 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.110 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.110 killing process with pid 71778 00:07:36.110 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71778' 00:07:36.110 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71778 00:07:36.110 02:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71778 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71778 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71778 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71778 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71778 ']' 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.369 
02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.369 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71778) - No such process 00:07:36.369 ERROR: process (pid: 71778) is no longer running 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:36.369 00:07:36.369 real 0m1.109s 00:07:36.369 user 0m1.185s 00:07:36.369 sys 0m0.460s 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.369 02:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.369 ************************************ 00:07:36.369 END TEST default_locks 00:07:36.369 ************************************ 00:07:36.369 02:11:38 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:36.369 02:11:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.369 02:11:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.369 02:11:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.629 ************************************ 00:07:36.629 START TEST default_locks_via_rpc 00:07:36.629 ************************************ 00:07:36.629 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:36.629 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71819 00:07:36.629 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71819 00:07:36.629 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:36.629 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71819 ']' 00:07:36.629 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.629 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:36.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.629 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.629 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.629 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.629 [2024-11-08 02:11:38.321928] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:36.629 [2024-11-08 02:11:38.322021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71819 ] 00:07:36.629 [2024-11-08 02:11:38.457220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.629 [2024-11-08 02:11:38.488857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.888 [2024-11-08 02:11:38.522684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71819 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71819 00:07:36.888 02:11:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:37.455 02:11:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71819 00:07:37.455 02:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71819 ']' 00:07:37.455 02:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71819 00:07:37.455 02:11:39 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:37.455 02:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.455 02:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71819 00:07:37.455 02:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.455 killing process with pid 71819 00:07:37.455 02:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.455 02:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71819' 00:07:37.455 02:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71819 00:07:37.455 02:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71819 00:07:37.714 00:07:37.714 real 0m1.120s 00:07:37.714 user 0m1.216s 00:07:37.714 sys 0m0.421s 00:07:37.714 02:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.714 02:11:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 ************************************ 00:07:37.714 END TEST default_locks_via_rpc 00:07:37.714 ************************************ 00:07:37.714 02:11:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:37.714 02:11:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.714 02:11:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.714 02:11:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 ************************************ 00:07:37.714 START TEST non_locking_app_on_locked_coremask 00:07:37.714 ************************************ 00:07:37.714 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:37.714 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71863 00:07:37.714 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71863 /var/tmp/spdk.sock 00:07:37.714 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:37.714 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71863 ']' 00:07:37.714 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.714 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.714 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
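The killprocess helper traced just above checks that the pid still exists and that it is not sudo before signalling it, then reaps it. A condensed sketch of that sequence (pid copied from the trace; any target pid started by the same shell works identically):

  pid=71819                                  # pid taken from the trace above
  kill -0 "$pid"                             # fails immediately if the process is already gone
  name=$(ps --no-headers -o comm= "$pid")    # e.g. "reactor_0" for an SPDK target
  [ "$name" != sudo ] && kill "$pid"         # never signal sudo by mistake
  wait "$pid" || true                        # reap it; works because the target is a child of this shell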
00:07:37.714 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.714 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.714 [2024-11-08 02:11:39.484129] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:37.714 [2024-11-08 02:11:39.484237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71863 ] 00:07:37.976 [2024-11-08 02:11:39.613077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.976 [2024-11-08 02:11:39.644900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.976 [2024-11-08 02:11:39.679004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.976 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.976 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:37.976 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71871 00:07:37.976 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71871 /var/tmp/spdk2.sock 00:07:37.976 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71871 ']' 00:07:37.976 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.976 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.976 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.976 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.976 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.976 02:11:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:38.234 [2024-11-08 02:11:39.897905] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:38.234 [2024-11-08 02:11:39.898010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71871 ] 00:07:38.234 [2024-11-08 02:11:40.037110] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
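The non_locking_app_on_locked_coremask case above starts two targets on the same core mask: the first claims the per-core lock file, and the second is launched with --disable-cpumask-locks and its own RPC socket so it does not contend for that lock. A bare sketch of the two launches as they appear in the trace (readiness waits on both sockets are elided here):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk.sock &                           # first instance takes the core-0 lock file
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second instance skips lock acquisition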
00:07:38.234 [2024-11-08 02:11:40.037174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.234 [2024-11-08 02:11:40.102901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.493 [2024-11-08 02:11:40.169796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.493 02:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.493 02:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:38.493 02:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71863 00:07:38.493 02:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71863 00:07:38.493 02:11:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:39.430 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71863 00:07:39.430 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71863 ']' 00:07:39.430 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71863 00:07:39.430 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:39.430 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.430 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71863 00:07:39.430 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.430 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.430 killing process with pid 71863 00:07:39.430 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71863' 00:07:39.430 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71863 00:07:39.430 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71863 00:07:39.998 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71871 00:07:39.998 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71871 ']' 00:07:39.998 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71871 00:07:39.998 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:39.998 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.998 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71871 00:07:39.998 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.998 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.998 killing process with pid 71871 00:07:39.998 02:11:41 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71871' 00:07:39.998 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71871 00:07:39.998 02:11:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71871 00:07:40.258 00:07:40.258 real 0m2.584s 00:07:40.258 user 0m2.986s 00:07:40.258 sys 0m0.915s 00:07:40.258 02:11:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.258 02:11:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.258 ************************************ 00:07:40.258 END TEST non_locking_app_on_locked_coremask 00:07:40.258 ************************************ 00:07:40.258 02:11:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:40.258 02:11:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.258 02:11:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.258 02:11:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.258 ************************************ 00:07:40.258 START TEST locking_app_on_unlocked_coremask 00:07:40.258 ************************************ 00:07:40.258 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:40.258 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71926 00:07:40.258 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71926 /var/tmp/spdk.sock 00:07:40.258 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:40.258 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71926 ']' 00:07:40.258 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.258 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.258 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.258 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.258 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.258 [2024-11-08 02:11:42.137224] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:40.258 [2024-11-08 02:11:42.137323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71926 ] 00:07:40.517 [2024-11-08 02:11:42.276279] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
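The locks_exist check that recurs through these tests is just lslocks filtered for the SPDK core lock name, as seen in the trace for pids 71778, 71819 and 71863. In isolation:

  pid=71863                                   # pid from the trace above
  lslocks -p "$pid" | grep -q spdk_cpu_lock   # succeeds while that process holds an spdk_cpu_lock file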
00:07:40.517 [2024-11-08 02:11:42.276327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.517 [2024-11-08 02:11:42.308449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.517 [2024-11-08 02:11:42.343750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.776 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.776 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:40.776 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71935 00:07:40.776 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:40.776 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71935 /var/tmp/spdk2.sock 00:07:40.776 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71935 ']' 00:07:40.776 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:40.776 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.776 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.776 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.776 02:11:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.776 [2024-11-08 02:11:42.526511] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:40.776 [2024-11-08 02:11:42.526796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71935 ] 00:07:41.036 [2024-11-08 02:11:42.669650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.036 [2024-11-08 02:11:42.735822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.036 [2024-11-08 02:11:42.809057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.295 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.295 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:41.295 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71935 00:07:41.295 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71935 00:07:41.295 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:42.233 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71926 00:07:42.233 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71926 ']' 00:07:42.233 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71926 00:07:42.233 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:42.233 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.234 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71926 00:07:42.234 killing process with pid 71926 00:07:42.234 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.234 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.234 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71926' 00:07:42.234 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71926 00:07:42.234 02:11:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71926 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71935 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71935 ']' 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71935 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71935 00:07:42.803 killing process with pid 71935 00:07:42.803 02:11:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71935' 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71935 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71935 00:07:42.803 ************************************ 00:07:42.803 END TEST locking_app_on_unlocked_coremask 00:07:42.803 ************************************ 00:07:42.803 00:07:42.803 real 0m2.604s 00:07:42.803 user 0m2.983s 00:07:42.803 sys 0m0.878s 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.803 02:11:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.062 02:11:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:43.062 02:11:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.062 02:11:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.062 02:11:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.062 ************************************ 00:07:43.062 START TEST locking_app_on_locked_coremask 00:07:43.062 ************************************ 00:07:43.062 02:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:43.062 02:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71994 00:07:43.062 02:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71994 /var/tmp/spdk.sock 00:07:43.062 02:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71994 ']' 00:07:43.062 02:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.062 02:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.062 02:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.062 02:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.062 02:11:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.062 02:11:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:43.062 [2024-11-08 02:11:44.785603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:43.063 [2024-11-08 02:11:44.785705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71994 ] 00:07:43.063 [2024-11-08 02:11:44.925359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.322 [2024-11-08 02:11:44.959913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.322 [2024-11-08 02:11:44.993214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71997 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71997 /var/tmp/spdk2.sock 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71997 /var/tmp/spdk2.sock 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71997 /var/tmp/spdk2.sock 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71997 ']' 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.322 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:43.322 [2024-11-08 02:11:45.175301] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
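The target starting just above (pid 71997) is pinned to core 0, which pid 71994 already locked, so the test wraps waitforlisten in the NOT helper: the step passes only if startup fails, and the "Cannot create lock on core 0" error plus the "No such process" kill below are the expected outcome. An illustrative invert-wrapper in the same spirit, assuming nothing beyond plain bash (not the implementation from autotest_common.sh):

    not() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        else
            return 0    # failure was expected
        fi
    }
    # not waitforlisten 71997 /var/tmp/spdk2.sock   # passes only when startup fails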
00:07:43.322 [2024-11-08 02:11:45.175400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71997 ] 00:07:43.580 [2024-11-08 02:11:45.313561] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71994 has claimed it. 00:07:43.580 [2024-11-08 02:11:45.313622] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:44.147 ERROR: process (pid: 71997) is no longer running 00:07:44.147 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71997) - No such process 00:07:44.147 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.147 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:44.147 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:44.147 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.147 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:44.147 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.147 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71994 00:07:44.147 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71994 00:07:44.147 02:11:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:44.715 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71994 00:07:44.716 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71994 ']' 00:07:44.716 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71994 00:07:44.716 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:44.716 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:44.716 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71994 00:07:44.716 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.716 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.716 killing process with pid 71994 00:07:44.716 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71994' 00:07:44.716 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71994 00:07:44.716 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71994 00:07:44.975 00:07:44.975 real 0m1.899s 00:07:44.975 user 0m2.302s 00:07:44.975 sys 0m0.525s 00:07:44.975 02:11:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.975 02:11:46 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:44.975 ************************************ 00:07:44.975 END TEST locking_app_on_locked_coremask 00:07:44.975 ************************************ 00:07:44.975 02:11:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:44.975 02:11:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.975 02:11:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.975 02:11:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.975 ************************************ 00:07:44.975 START TEST locking_overlapped_coremask 00:07:44.975 ************************************ 00:07:44.975 02:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:44.975 02:11:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72043 00:07:44.975 02:11:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72043 /var/tmp/spdk.sock 00:07:44.975 02:11:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:44.975 02:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72043 ']' 00:07:44.975 02:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.975 02:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.975 02:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.975 02:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.975 02:11:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.975 [2024-11-08 02:11:46.736546] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
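For the overlapped test, the first target above is started with -m 0x7 and holds the locks for its cores, while the second target below uses -m 0x1c. Written out bit by bit, the masks overlap on exactly one core, which is why the claim error that follows names core 2:

    #   0x07 = 0b00111  -> cores 0,1,2   (first spdk_tgt, holds the locks)
    #   0x1c = 0b11100  -> cores 2,3,4   (second spdk_tgt)
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. only core 2 is contested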
00:07:44.975 [2024-11-08 02:11:46.736656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72043 ] 00:07:45.235 [2024-11-08 02:11:46.877202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:45.235 [2024-11-08 02:11:46.910131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.235 [2024-11-08 02:11:46.910257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.235 [2024-11-08 02:11:46.910276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.235 [2024-11-08 02:11:46.944228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72053 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72053 /var/tmp/spdk2.sock 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72053 /var/tmp/spdk2.sock 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72053 /var/tmp/spdk2.sock 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72053 ']' 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.235 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.494 [2024-11-08 02:11:47.129097] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:45.494 [2024-11-08 02:11:47.129209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72053 ] 00:07:45.495 [2024-11-08 02:11:47.275603] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72043 has claimed it. 00:07:45.495 [2024-11-08 02:11:47.275666] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:46.065 ERROR: process (pid: 72053) is no longer running 00:07:46.065 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72053) - No such process 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72043 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 72043 ']' 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 72043 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72043 00:07:46.065 killing process with pid 72043 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72043' 00:07:46.065 02:11:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 72043 00:07:46.065 02:11:47 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 72043 00:07:46.372 ************************************ 00:07:46.372 END TEST locking_overlapped_coremask 00:07:46.372 ************************************ 00:07:46.372 00:07:46.372 real 0m1.446s 00:07:46.372 user 0m3.999s 00:07:46.372 sys 0m0.286s 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.372 02:11:48 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:46.372 02:11:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.372 02:11:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.372 02:11:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.372 ************************************ 00:07:46.372 START TEST locking_overlapped_coremask_via_rpc 00:07:46.372 ************************************ 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72093 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72093 /var/tmp/spdk.sock 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72093 ']' 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.372 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.372 [2024-11-08 02:11:48.221213] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:46.372 [2024-11-08 02:11:48.221483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72093 ] 00:07:46.632 [2024-11-08 02:11:48.351627] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
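In this via_rpc variant both targets are started with --disable-cpumask-locks (hence the "deactivated" notice above) and the locks are taken only afterwards, at runtime, through the framework_enable_cpumask_locks RPC. Roughly what the rpc_cmd calls below amount to, sketched with scripts/rpc.py and the two sockets used in this run (rpc_cmd itself adds the retry and socket plumbing):

    scripts/rpc.py framework_enable_cpumask_locks                          # first target, /var/tmp/spdk.sock, succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target, expected to fail on core 2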
00:07:46.632 [2024-11-08 02:11:48.351658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.632 [2024-11-08 02:11:48.385137] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.632 [2024-11-08 02:11:48.385265] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.632 [2024-11-08 02:11:48.385268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.632 [2024-11-08 02:11:48.422050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.891 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.891 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:46.891 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72098 00:07:46.891 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:46.891 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72098 /var/tmp/spdk2.sock 00:07:46.891 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72098 ']' 00:07:46.891 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.891 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.891 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:46.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.891 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.891 02:11:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.891 [2024-11-08 02:11:48.614973] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:46.891 [2024-11-08 02:11:48.615281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72098 ] 00:07:46.891 [2024-11-08 02:11:48.763037] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:46.891 [2024-11-08 02:11:48.763091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.149 [2024-11-08 02:11:48.838049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.149 [2024-11-08 02:11:48.839268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:47.149 [2024-11-08 02:11:48.839271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.149 [2024-11-08 02:11:48.907180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.084 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.084 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:48.084 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:48.084 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.084 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.085 [2024-11-08 02:11:49.643295] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72093 has claimed it. 00:07:48.085 request: 00:07:48.085 { 00:07:48.085 "method": "framework_enable_cpumask_locks", 00:07:48.085 "req_id": 1 00:07:48.085 } 00:07:48.085 Got JSON-RPC error response 00:07:48.085 response: 00:07:48.085 { 00:07:48.085 "code": -32603, 00:07:48.085 "message": "Failed to claim CPU core: 2" 00:07:48.085 } 00:07:48.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
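The JSON-RPC exchange above is the heart of the test: the second target cannot take the core 2 lock the first target now owns, so the RPC fails with code -32603 and "Failed to claim CPU core: 2" instead of crashing either process. The check_remaining_locks step further down then verifies that exactly the lock files for cores 0-2 exist. A sketch of that verification, mirroring the expansion visible in the trace:

    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${expected[*]}" ]] && echo "only cores 0-2 are locked"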
00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72093 /var/tmp/spdk.sock 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72093 ']' 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72098 /var/tmp/spdk2.sock 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72098 ']' 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:48.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
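waitforlisten, whose "Waiting for process..." messages appear above for both sockets, blocks until the freshly started target is actually reachable (up to max_retries=100 as shown in the trace). A minimal sketch of the idea, polling until the pid is alive and its RPC socket exists; the real helper drives this through rpc.py, so treat this as an approximation:

    wait_for_sock() {
        local pid=$1 sock=$2 retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [[ -S "$sock" ]] && return 0             # RPC socket is up
            sleep 0.1
        done
        return 1
    }
    # wait_for_sock 72098 /var/tmp/spdk2.sock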
00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.085 02:11:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.344 02:11:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.344 02:11:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:48.344 02:11:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:48.344 02:11:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:48.344 02:11:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:48.344 02:11:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:48.344 00:07:48.344 real 0m2.057s 00:07:48.344 user 0m1.214s 00:07:48.344 sys 0m0.143s 00:07:48.344 02:11:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.344 02:11:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.344 ************************************ 00:07:48.344 END TEST locking_overlapped_coremask_via_rpc 00:07:48.344 ************************************ 00:07:48.603 02:11:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:48.603 02:11:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72093 ]] 00:07:48.603 02:11:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72093 00:07:48.603 02:11:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72093 ']' 00:07:48.603 02:11:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72093 00:07:48.603 02:11:50 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:48.603 02:11:50 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.603 02:11:50 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72093 00:07:48.603 killing process with pid 72093 00:07:48.603 02:11:50 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.603 02:11:50 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.603 02:11:50 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72093' 00:07:48.603 02:11:50 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72093 00:07:48.603 02:11:50 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72093 00:07:48.862 02:11:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72098 ]] 00:07:48.862 02:11:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72098 00:07:48.862 02:11:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72098 ']' 00:07:48.862 02:11:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72098 00:07:48.862 02:11:50 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:48.862 02:11:50 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.862 
02:11:50 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72098 00:07:48.862 killing process with pid 72098 00:07:48.862 02:11:50 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:48.862 02:11:50 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:48.862 02:11:50 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72098' 00:07:48.862 02:11:50 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72098 00:07:48.862 02:11:50 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72098 00:07:49.121 02:11:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:49.121 02:11:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:49.121 02:11:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72093 ]] 00:07:49.121 02:11:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72093 00:07:49.121 02:11:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72093 ']' 00:07:49.121 02:11:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72093 00:07:49.121 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72093) - No such process 00:07:49.121 Process with pid 72093 is not found 00:07:49.122 02:11:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72093 is not found' 00:07:49.122 Process with pid 72098 is not found 00:07:49.122 02:11:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72098 ]] 00:07:49.122 02:11:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72098 00:07:49.122 02:11:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72098 ']' 00:07:49.122 02:11:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72098 00:07:49.122 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72098) - No such process 00:07:49.122 02:11:50 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72098 is not found' 00:07:49.122 02:11:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:49.122 ************************************ 00:07:49.122 END TEST cpu_locks 00:07:49.122 ************************************ 00:07:49.122 00:07:49.122 real 0m13.910s 00:07:49.122 user 0m26.337s 00:07:49.122 sys 0m4.285s 00:07:49.122 02:11:50 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.122 02:11:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.122 ************************************ 00:07:49.122 END TEST event 00:07:49.122 ************************************ 00:07:49.122 00:07:49.122 real 0m40.726s 00:07:49.122 user 1m21.733s 00:07:49.122 sys 0m7.373s 00:07:49.122 02:11:50 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.122 02:11:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:49.122 02:11:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:49.122 02:11:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.122 02:11:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.122 02:11:50 -- common/autotest_common.sh@10 -- # set +x 00:07:49.122 ************************************ 00:07:49.122 START TEST thread 00:07:49.122 ************************************ 00:07:49.122 02:11:50 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:49.122 * Looking for test storage... 
00:07:49.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:49.122 02:11:50 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:49.122 02:11:50 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:49.122 02:11:50 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:49.381 02:11:51 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:49.381 02:11:51 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.381 02:11:51 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.381 02:11:51 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.381 02:11:51 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.381 02:11:51 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.381 02:11:51 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.381 02:11:51 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.381 02:11:51 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.381 02:11:51 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.381 02:11:51 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.381 02:11:51 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.381 02:11:51 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:49.381 02:11:51 thread -- scripts/common.sh@345 -- # : 1 00:07:49.381 02:11:51 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.381 02:11:51 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.381 02:11:51 thread -- scripts/common.sh@365 -- # decimal 1 00:07:49.381 02:11:51 thread -- scripts/common.sh@353 -- # local d=1 00:07:49.381 02:11:51 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.381 02:11:51 thread -- scripts/common.sh@355 -- # echo 1 00:07:49.381 02:11:51 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.381 02:11:51 thread -- scripts/common.sh@366 -- # decimal 2 00:07:49.381 02:11:51 thread -- scripts/common.sh@353 -- # local d=2 00:07:49.381 02:11:51 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.381 02:11:51 thread -- scripts/common.sh@355 -- # echo 2 00:07:49.381 02:11:51 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.381 02:11:51 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.381 02:11:51 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.381 02:11:51 thread -- scripts/common.sh@368 -- # return 0 00:07:49.381 02:11:51 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.381 02:11:51 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:49.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.381 --rc genhtml_branch_coverage=1 00:07:49.381 --rc genhtml_function_coverage=1 00:07:49.381 --rc genhtml_legend=1 00:07:49.381 --rc geninfo_all_blocks=1 00:07:49.381 --rc geninfo_unexecuted_blocks=1 00:07:49.381 00:07:49.381 ' 00:07:49.381 02:11:51 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:49.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.381 --rc genhtml_branch_coverage=1 00:07:49.381 --rc genhtml_function_coverage=1 00:07:49.381 --rc genhtml_legend=1 00:07:49.381 --rc geninfo_all_blocks=1 00:07:49.381 --rc geninfo_unexecuted_blocks=1 00:07:49.381 00:07:49.381 ' 00:07:49.381 02:11:51 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:49.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:49.381 --rc genhtml_branch_coverage=1 00:07:49.381 --rc genhtml_function_coverage=1 00:07:49.381 --rc genhtml_legend=1 00:07:49.381 --rc geninfo_all_blocks=1 00:07:49.381 --rc geninfo_unexecuted_blocks=1 00:07:49.381 00:07:49.381 ' 00:07:49.381 02:11:51 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:49.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.381 --rc genhtml_branch_coverage=1 00:07:49.381 --rc genhtml_function_coverage=1 00:07:49.381 --rc genhtml_legend=1 00:07:49.381 --rc geninfo_all_blocks=1 00:07:49.381 --rc geninfo_unexecuted_blocks=1 00:07:49.381 00:07:49.381 ' 00:07:49.381 02:11:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:49.381 02:11:51 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:49.381 02:11:51 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.381 02:11:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.381 ************************************ 00:07:49.381 START TEST thread_poller_perf 00:07:49.381 ************************************ 00:07:49.382 02:11:51 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:49.382 [2024-11-08 02:11:51.102714] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:49.382 [2024-11-08 02:11:51.102798] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72234 ] 00:07:49.382 [2024-11-08 02:11:51.236834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.640 [2024-11-08 02:11:51.270533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.640 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:50.580 [2024-11-08T02:11:52.464Z] ====================================== 00:07:50.580 [2024-11-08T02:11:52.464Z] busy:2206101850 (cyc) 00:07:50.580 [2024-11-08T02:11:52.464Z] total_run_count: 367000 00:07:50.580 [2024-11-08T02:11:52.464Z] tsc_hz: 2200000000 (cyc) 00:07:50.580 [2024-11-08T02:11:52.464Z] ====================================== 00:07:50.580 [2024-11-08T02:11:52.464Z] poller_cost: 6011 (cyc), 2732 (nsec) 00:07:50.580 00:07:50.580 real 0m1.241s 00:07:50.580 user 0m1.094s 00:07:50.580 sys 0m0.040s 00:07:50.580 02:11:52 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.580 02:11:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:50.580 ************************************ 00:07:50.580 END TEST thread_poller_perf 00:07:50.580 ************************************ 00:07:50.580 02:11:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:50.580 02:11:52 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:50.580 02:11:52 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.580 02:11:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:50.580 ************************************ 00:07:50.580 START TEST thread_poller_perf 00:07:50.580 ************************************ 00:07:50.580 02:11:52 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:50.580 [2024-11-08 02:11:52.392679] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:50.580 [2024-11-08 02:11:52.392759] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72264 ] 00:07:50.841 [2024-11-08 02:11:52.521568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.841 Running 1000 pollers for 1 seconds with 0 microseconds period. 
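The summary table above for the 1-microsecond-period run appears to be derived directly from the counters it prints: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts that with the reported tsc_hz. A quick check of the arithmetic against the numbers above (the 0-period run that starts on the previous line prints its own table next):

    awk 'BEGIN { busy=2206101850; runs=367000; hz=2200000000;
                 cyc=busy/runs; printf "%.0f cyc, %.0f nsec\n", cyc, cyc/hz*1e9 }'
    # -> 6011 cyc, 2732 nsec, matching the poller_cost line above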
00:07:50.841 [2024-11-08 02:11:52.555923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.778 [2024-11-08T02:11:53.662Z] ====================================== 00:07:51.778 [2024-11-08T02:11:53.662Z] busy:2201789032 (cyc) 00:07:51.778 [2024-11-08T02:11:53.662Z] total_run_count: 4724000 00:07:51.778 [2024-11-08T02:11:53.662Z] tsc_hz: 2200000000 (cyc) 00:07:51.778 [2024-11-08T02:11:53.662Z] ====================================== 00:07:51.778 [2024-11-08T02:11:53.662Z] poller_cost: 466 (cyc), 211 (nsec) 00:07:51.778 00:07:51.778 real 0m1.236s 00:07:51.778 user 0m1.085s 00:07:51.778 sys 0m0.044s 00:07:51.778 02:11:53 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.778 ************************************ 00:07:51.778 END TEST thread_poller_perf 00:07:51.778 ************************************ 00:07:51.779 02:11:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:51.779 02:11:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:51.779 ************************************ 00:07:51.779 END TEST thread 00:07:51.779 ************************************ 00:07:51.779 00:07:51.779 real 0m2.766s 00:07:51.779 user 0m2.336s 00:07:51.779 sys 0m0.212s 00:07:51.779 02:11:53 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.779 02:11:53 thread -- common/autotest_common.sh@10 -- # set +x 00:07:52.038 02:11:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:52.038 02:11:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:52.038 02:11:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.038 02:11:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.038 02:11:53 -- common/autotest_common.sh@10 -- # set +x 00:07:52.038 ************************************ 00:07:52.038 START TEST app_cmdline 00:07:52.038 ************************************ 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:52.038 * Looking for test storage... 
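Comparing the two poller_perf runs above: with a 1 microsecond period each poller costs about 6011 cycles (2732 nsec) per invocation, while with a period of 0 the same workload costs about 466 cycles (211 nsec). One plausible reading is that the non-zero period turns each poller into a timed poller, so the difference, roughly 6011 - 466 = 5545 cycles or about 2520 nsec per call at the reported 2.2 GHz, is extra timer bookkeeping rather than the poll itself; the 0-period run is closer to the bare dispatch cost.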
00:07:52.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.038 02:11:53 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:52.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.038 --rc genhtml_branch_coverage=1 00:07:52.038 --rc genhtml_function_coverage=1 00:07:52.038 --rc genhtml_legend=1 00:07:52.038 --rc geninfo_all_blocks=1 00:07:52.038 --rc geninfo_unexecuted_blocks=1 00:07:52.038 00:07:52.038 ' 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:52.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.038 --rc genhtml_branch_coverage=1 00:07:52.038 --rc genhtml_function_coverage=1 00:07:52.038 --rc genhtml_legend=1 00:07:52.038 --rc geninfo_all_blocks=1 00:07:52.038 --rc geninfo_unexecuted_blocks=1 00:07:52.038 
00:07:52.038 ' 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:52.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.038 --rc genhtml_branch_coverage=1 00:07:52.038 --rc genhtml_function_coverage=1 00:07:52.038 --rc genhtml_legend=1 00:07:52.038 --rc geninfo_all_blocks=1 00:07:52.038 --rc geninfo_unexecuted_blocks=1 00:07:52.038 00:07:52.038 ' 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:52.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.038 --rc genhtml_branch_coverage=1 00:07:52.038 --rc genhtml_function_coverage=1 00:07:52.038 --rc genhtml_legend=1 00:07:52.038 --rc geninfo_all_blocks=1 00:07:52.038 --rc geninfo_unexecuted_blocks=1 00:07:52.038 00:07:52.038 ' 00:07:52.038 02:11:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:52.038 02:11:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72347 00:07:52.038 02:11:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:52.038 02:11:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72347 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72347 ']' 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.038 02:11:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:52.297 [2024-11-08 02:11:53.962010] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
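(annotation, not part of the captured log) The app_cmdline test starting here launches spdk_tgt with an RPC allow-list and then, in the records below, checks that only the allowed methods answer while env_dpdk_get_mem_stats is rejected. A minimal standalone sketch of that flow, reusing the paths seen in this log and a crude wait loop in place of waitforlisten:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  tgt_pid=$!
  # crude stand-in for waitforlisten: wait for the RPC UNIX socket to appear
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  "$SPDK"/scripts/rpc.py spdk_get_version                      # allowed: returns the version JSON
  "$SPDK"/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # allowed: lists exactly the two methods
  "$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats \
      || echo 'rejected as expected (-32601 Method not found)' # not on the allow-list
  kill "$tgt_pid"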
00:07:52.297 [2024-11-08 02:11:53.962597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72347 ] 00:07:52.297 [2024-11-08 02:11:54.098208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.297 [2024-11-08 02:11:54.131424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.297 [2024-11-08 02:11:54.166118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.235 02:11:54 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.235 02:11:54 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:53.235 02:11:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:53.494 { 00:07:53.494 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:53.494 "fields": { 00:07:53.494 "major": 24, 00:07:53.494 "minor": 9, 00:07:53.494 "patch": 1, 00:07:53.494 "suffix": "-pre", 00:07:53.494 "commit": "b18e1bd62" 00:07:53.494 } 00:07:53.494 } 00:07:53.494 02:11:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:53.494 02:11:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:53.494 02:11:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:53.494 02:11:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:53.494 02:11:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:53.494 02:11:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.494 02:11:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.494 02:11:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:53.494 02:11:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:53.494 02:11:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:53.494 02:11:55 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.753 request: 00:07:53.753 { 00:07:53.753 "method": "env_dpdk_get_mem_stats", 00:07:53.753 "req_id": 1 00:07:53.753 } 00:07:53.753 Got JSON-RPC error response 00:07:53.753 response: 00:07:53.753 { 00:07:53.754 "code": -32601, 00:07:53.754 "message": "Method not found" 00:07:53.754 } 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:53.754 02:11:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72347 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72347 ']' 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72347 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72347 00:07:53.754 killing process with pid 72347 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72347' 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@969 -- # kill 72347 00:07:53.754 02:11:55 app_cmdline -- common/autotest_common.sh@974 -- # wait 72347 00:07:54.013 ************************************ 00:07:54.013 END TEST app_cmdline 00:07:54.013 ************************************ 00:07:54.013 00:07:54.013 real 0m2.114s 00:07:54.013 user 0m2.793s 00:07:54.013 sys 0m0.368s 00:07:54.013 02:11:55 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.013 02:11:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.013 02:11:55 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:54.013 02:11:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.013 02:11:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.013 02:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:54.013 ************************************ 00:07:54.013 START TEST version 00:07:54.013 ************************************ 00:07:54.013 02:11:55 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:54.272 * Looking for test storage... 
00:07:54.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:54.272 02:11:55 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:54.272 02:11:55 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:54.272 02:11:55 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:54.272 02:11:56 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:54.272 02:11:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.272 02:11:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.272 02:11:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.272 02:11:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.272 02:11:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.272 02:11:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.272 02:11:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.272 02:11:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.272 02:11:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.272 02:11:56 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.272 02:11:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.272 02:11:56 version -- scripts/common.sh@344 -- # case "$op" in 00:07:54.272 02:11:56 version -- scripts/common.sh@345 -- # : 1 00:07:54.272 02:11:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.272 02:11:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.272 02:11:56 version -- scripts/common.sh@365 -- # decimal 1 00:07:54.272 02:11:56 version -- scripts/common.sh@353 -- # local d=1 00:07:54.272 02:11:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.272 02:11:56 version -- scripts/common.sh@355 -- # echo 1 00:07:54.272 02:11:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.272 02:11:56 version -- scripts/common.sh@366 -- # decimal 2 00:07:54.272 02:11:56 version -- scripts/common.sh@353 -- # local d=2 00:07:54.272 02:11:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.272 02:11:56 version -- scripts/common.sh@355 -- # echo 2 00:07:54.272 02:11:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.272 02:11:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.273 02:11:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.273 02:11:56 version -- scripts/common.sh@368 -- # return 0 00:07:54.273 02:11:56 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.273 02:11:56 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:54.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.273 --rc genhtml_branch_coverage=1 00:07:54.273 --rc genhtml_function_coverage=1 00:07:54.273 --rc genhtml_legend=1 00:07:54.273 --rc geninfo_all_blocks=1 00:07:54.273 --rc geninfo_unexecuted_blocks=1 00:07:54.273 00:07:54.273 ' 00:07:54.273 02:11:56 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:54.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.273 --rc genhtml_branch_coverage=1 00:07:54.273 --rc genhtml_function_coverage=1 00:07:54.273 --rc genhtml_legend=1 00:07:54.273 --rc geninfo_all_blocks=1 00:07:54.273 --rc geninfo_unexecuted_blocks=1 00:07:54.273 00:07:54.273 ' 00:07:54.273 02:11:56 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:54.273 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:54.273 --rc genhtml_branch_coverage=1 00:07:54.273 --rc genhtml_function_coverage=1 00:07:54.273 --rc genhtml_legend=1 00:07:54.273 --rc geninfo_all_blocks=1 00:07:54.273 --rc geninfo_unexecuted_blocks=1 00:07:54.273 00:07:54.273 ' 00:07:54.273 02:11:56 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:54.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.273 --rc genhtml_branch_coverage=1 00:07:54.273 --rc genhtml_function_coverage=1 00:07:54.273 --rc genhtml_legend=1 00:07:54.273 --rc geninfo_all_blocks=1 00:07:54.273 --rc geninfo_unexecuted_blocks=1 00:07:54.273 00:07:54.273 ' 00:07:54.273 02:11:56 version -- app/version.sh@17 -- # get_header_version major 00:07:54.273 02:11:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.273 02:11:56 version -- app/version.sh@14 -- # cut -f2 00:07:54.273 02:11:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.273 02:11:56 version -- app/version.sh@17 -- # major=24 00:07:54.273 02:11:56 version -- app/version.sh@18 -- # get_header_version minor 00:07:54.273 02:11:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.273 02:11:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.273 02:11:56 version -- app/version.sh@14 -- # cut -f2 00:07:54.273 02:11:56 version -- app/version.sh@18 -- # minor=9 00:07:54.273 02:11:56 version -- app/version.sh@19 -- # get_header_version patch 00:07:54.273 02:11:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.273 02:11:56 version -- app/version.sh@14 -- # cut -f2 00:07:54.273 02:11:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.273 02:11:56 version -- app/version.sh@19 -- # patch=1 00:07:54.273 02:11:56 version -- app/version.sh@20 -- # get_header_version suffix 00:07:54.273 02:11:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:54.273 02:11:56 version -- app/version.sh@14 -- # cut -f2 00:07:54.273 02:11:56 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.273 02:11:56 version -- app/version.sh@20 -- # suffix=-pre 00:07:54.273 02:11:56 version -- app/version.sh@22 -- # version=24.9 00:07:54.273 02:11:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:54.273 02:11:56 version -- app/version.sh@25 -- # version=24.9.1 00:07:54.273 02:11:56 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:54.273 02:11:56 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:54.273 02:11:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:54.273 02:11:56 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:54.273 02:11:56 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:54.273 00:07:54.273 real 0m0.243s 00:07:54.273 user 0m0.152s 00:07:54.273 sys 0m0.127s 00:07:54.273 02:11:56 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.273 02:11:56 version -- common/autotest_common.sh@10 -- # set +x 00:07:54.273 ************************************ 00:07:54.273 END TEST version 
00:07:54.273 ************************************ 00:07:54.532 02:11:56 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:54.532 02:11:56 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:54.532 02:11:56 -- spdk/autotest.sh@194 -- # uname -s 00:07:54.532 02:11:56 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:54.532 02:11:56 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:54.532 02:11:56 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:54.532 02:11:56 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:54.532 02:11:56 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:54.532 02:11:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.532 02:11:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.533 02:11:56 -- common/autotest_common.sh@10 -- # set +x 00:07:54.533 ************************************ 00:07:54.533 START TEST spdk_dd 00:07:54.533 ************************************ 00:07:54.533 02:11:56 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:54.533 * Looking for test storage... 00:07:54.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:54.533 02:11:56 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:54.533 02:11:56 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:07:54.533 02:11:56 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:54.533 02:11:56 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:54.533 02:11:56 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.533 02:11:56 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:54.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.533 --rc genhtml_branch_coverage=1 00:07:54.533 --rc genhtml_function_coverage=1 00:07:54.533 --rc genhtml_legend=1 00:07:54.533 --rc geninfo_all_blocks=1 00:07:54.533 --rc geninfo_unexecuted_blocks=1 00:07:54.533 00:07:54.533 ' 00:07:54.533 02:11:56 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:54.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.533 --rc genhtml_branch_coverage=1 00:07:54.533 --rc genhtml_function_coverage=1 00:07:54.533 --rc genhtml_legend=1 00:07:54.533 --rc geninfo_all_blocks=1 00:07:54.533 --rc geninfo_unexecuted_blocks=1 00:07:54.533 00:07:54.533 ' 00:07:54.533 02:11:56 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:54.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.533 --rc genhtml_branch_coverage=1 00:07:54.533 --rc genhtml_function_coverage=1 00:07:54.533 --rc genhtml_legend=1 00:07:54.533 --rc geninfo_all_blocks=1 00:07:54.533 --rc geninfo_unexecuted_blocks=1 00:07:54.533 00:07:54.533 ' 00:07:54.533 02:11:56 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:54.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.533 --rc genhtml_branch_coverage=1 00:07:54.533 --rc genhtml_function_coverage=1 00:07:54.533 --rc genhtml_legend=1 00:07:54.533 --rc geninfo_all_blocks=1 00:07:54.533 --rc geninfo_unexecuted_blocks=1 00:07:54.533 00:07:54.533 ' 00:07:54.533 02:11:56 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.533 02:11:56 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.533 02:11:56 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.533 02:11:56 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.533 02:11:56 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.533 02:11:56 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:54.533 02:11:56 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.533 02:11:56 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:55.103 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:55.103 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:55.103 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:55.103 02:11:56 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:55.103 02:11:56 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:55.103 02:11:56 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:55.103 02:11:56 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:55.104 02:11:56 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:55.104 02:11:56 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
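(annotation, not part of the captured log) A few records up, nvme_in_userspace enumerates NVMe PCI functions by matching class 01 / subclass 08 / prog-if 02 in lspci output and keeping only devices not claimed by the kernel nvme driver. A condensed standalone sketch of that walk, using the same lspci pipeline as the trace (FreeBSD branch omitted):

  # class 01, subclass 08 -> "0108"; prog-if 02 -> "-p02" in lspci -mm -n -D output
  mapfile -t nvmes < <(lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')
  for bdf in "${nvmes[@]}"; do
      # a device still bound to the kernel nvme driver is left alone
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue
      echo "$bdf"          # usable from userspace (e.g. uio_pci_generic/vfio)
  done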
00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:55.104 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:55.105 * spdk_dd linked to liburing 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:55.105 02:11:56 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 
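(annotation, not part of the captured log) The long run of [[ ... == liburing.so.* ]] tests above is check_liburing from dd/common.sh walking the DT_NEEDED entries of the spdk_dd binary; it has just matched liburing.so.2 and is now dumping build_config.sh. A condensed sketch of that NEEDED scan, with the same binary path as this log and the build_config handling left out:

  liburing_in_use=0
  while read -r _ lib _; do
      # objdump lines look like "  NEEDED   <soname>"; keep only the soname
      if [[ $lib == liburing.so.* ]]; then
          printf '* spdk_dd linked to liburing\n'
          liburing_in_use=1
      fi
  done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
  echo "liburing_in_use=$liburing_in_use"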
00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:55.105 02:11:56 
spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:55.105 02:11:56 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:55.106 02:11:56 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:55.106 02:11:56 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:55.106 02:11:56 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:55.106 02:11:56 spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:55.106 02:11:56 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:07:55.106 02:11:56 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:55.106 02:11:56 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:55.106 02:11:56 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:55.106 02:11:56 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:55.106 02:11:56 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:07:55.106 02:11:56 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:55.106 02:11:56 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:55.106 02:11:56 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:55.106 02:11:56 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:55.106 02:11:56 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:55.106 02:11:56 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:55.106 02:11:56 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:55.106 02:11:56 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.106 02:11:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:55.106 ************************************ 00:07:55.106 START TEST spdk_dd_basic_rw 00:07:55.106 ************************************ 00:07:55.106 02:11:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:55.366 * Looking for test storage... 
00:07:55.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:55.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.366 --rc genhtml_branch_coverage=1 00:07:55.366 --rc genhtml_function_coverage=1 00:07:55.366 --rc genhtml_legend=1 00:07:55.366 --rc geninfo_all_blocks=1 00:07:55.366 --rc geninfo_unexecuted_blocks=1 00:07:55.366 00:07:55.366 ' 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:55.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.366 --rc genhtml_branch_coverage=1 00:07:55.366 --rc genhtml_function_coverage=1 00:07:55.366 --rc genhtml_legend=1 00:07:55.366 --rc geninfo_all_blocks=1 00:07:55.366 --rc geninfo_unexecuted_blocks=1 00:07:55.366 00:07:55.366 ' 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:55.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.366 --rc genhtml_branch_coverage=1 00:07:55.366 --rc genhtml_function_coverage=1 00:07:55.366 --rc genhtml_legend=1 00:07:55.366 --rc geninfo_all_blocks=1 00:07:55.366 --rc geninfo_unexecuted_blocks=1 00:07:55.366 00:07:55.366 ' 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:55.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.366 --rc genhtml_branch_coverage=1 00:07:55.366 --rc genhtml_function_coverage=1 00:07:55.366 --rc genhtml_legend=1 00:07:55.366 --rc geninfo_all_blocks=1 00:07:55.366 --rc geninfo_unexecuted_blocks=1 00:07:55.366 00:07:55.366 ' 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.366 02:11:57 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:55.366 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:55.628 ************************************ 00:07:55.628 START TEST dd_bs_lt_native_bs 00:07:55.628 ************************************ 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.628 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.629 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.629 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.629 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.629 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.629 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.629 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:55.629 { 00:07:55.629 "subsystems": [ 00:07:55.629 { 00:07:55.629 "subsystem": "bdev", 00:07:55.629 "config": [ 00:07:55.629 { 00:07:55.629 "params": { 00:07:55.629 "trtype": "pcie", 00:07:55.629 "traddr": "0000:00:10.0", 00:07:55.629 "name": "Nvme0" 00:07:55.629 }, 00:07:55.629 "method": "bdev_nvme_attach_controller" 00:07:55.629 }, 00:07:55.629 { 00:07:55.629 "method": "bdev_wait_for_examine" 00:07:55.629 } 00:07:55.629 ] 00:07:55.629 } 00:07:55.629 ] 00:07:55.629 } 00:07:55.629 [2024-11-08 02:11:57.397703] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:55.629 [2024-11-08 02:11:57.397827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72698 ] 00:07:55.888 [2024-11-08 02:11:57.539546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.888 [2024-11-08 02:11:57.580880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.888 [2024-11-08 02:11:57.613181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.888 [2024-11-08 02:11:57.704612] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:55.888 [2024-11-08 02:11:57.704923] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.888 [2024-11-08 02:11:57.770000] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:56.147 00:07:56.147 real 0m0.509s 00:07:56.147 user 0m0.340s 00:07:56.147 sys 0m0.125s 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.147 
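Note on the dd_bs_lt_native_bs case traced above: it is a negative test. spdk_dd is run through the NOT helper with --bs=2048, deliberately smaller than the 4096-byte native block size, and the case passes only because spdk_dd refuses the copy ("--bs value cannot be less than ... native block size"). The 4096 figure comes from get_native_nvme_bs, which pattern-matches the spdk_nvme_identify dump shown earlier. A minimal bash sketch of that extraction, reusing the exact regexes visible in the trace (variable names are illustrative):

    # pull the in-use LBA data size out of the identify output for 0000:00:10.0
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}          # "04" on this controller
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}     # 4096
    echo "$native_bs"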
************************************ 00:07:56.147 END TEST dd_bs_lt_native_bs 00:07:56.147 ************************************ 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:56.147 ************************************ 00:07:56.147 START TEST dd_rw 00:07:56.147 ************************************ 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:56.147 02:11:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:56.716 02:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:56.716 02:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:56.716 02:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:56.716 02:11:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:56.716 { 00:07:56.716 "subsystems": [ 00:07:56.716 { 00:07:56.716 "subsystem": "bdev", 00:07:56.716 "config": [ 00:07:56.716 { 00:07:56.716 "params": { 00:07:56.716 "trtype": "pcie", 00:07:56.716 "traddr": "0000:00:10.0", 00:07:56.716 "name": "Nvme0" 00:07:56.716 }, 00:07:56.716 "method": "bdev_nvme_attach_controller" 00:07:56.716 }, 00:07:56.716 { 00:07:56.716 "method": "bdev_wait_for_examine" 00:07:56.716 } 00:07:56.716 ] 
00:07:56.716 } 00:07:56.716 ] 00:07:56.716 } 00:07:56.716 [2024-11-08 02:11:58.575739] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:56.716 [2024-11-08 02:11:58.575836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72729 ] 00:07:56.975 [2024-11-08 02:11:58.715438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.975 [2024-11-08 02:11:58.750714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.975 [2024-11-08 02:11:58.778587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.235  [2024-11-08T02:11:59.119Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:57.235 00:07:57.235 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:57.235 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:57.235 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:57.235 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.235 { 00:07:57.235 "subsystems": [ 00:07:57.235 { 00:07:57.235 "subsystem": "bdev", 00:07:57.235 "config": [ 00:07:57.235 { 00:07:57.235 "params": { 00:07:57.235 "trtype": "pcie", 00:07:57.235 "traddr": "0000:00:10.0", 00:07:57.235 "name": "Nvme0" 00:07:57.235 }, 00:07:57.235 "method": "bdev_nvme_attach_controller" 00:07:57.235 }, 00:07:57.235 { 00:07:57.235 "method": "bdev_wait_for_examine" 00:07:57.235 } 00:07:57.235 ] 00:07:57.235 } 00:07:57.235 ] 00:07:57.235 } 00:07:57.235 [2024-11-08 02:11:59.071035] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
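Each dd_rw iteration follows the same three-step cycle that is underway here for bs=4096, qd=1: write the generated dump file into the Nvme0n1 bdev, read the same amount back into a second file, then compare the two. A condensed sketch assembled from the spdk_dd invocations traced in this log (the JSON bdev config normally passed via --json /dev/fd/62 is omitted for brevity):

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    bs=4096 qd=1 count=15                                      # 15 * 4096 = 61440 bytes, as logged
    "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd"                    # write
    "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs="$bs" --qd="$qd" --count="$count"   # read back
    diff -q "$DUMP0" "$DUMP1"                                  # pass only if the round trip is bit-identical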
00:07:57.235 [2024-11-08 02:11:59.071177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72743 ] 00:07:57.492 [2024-11-08 02:11:59.209919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.492 [2024-11-08 02:11:59.245370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.492 [2024-11-08 02:11:59.276985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.492  [2024-11-08T02:11:59.635Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:57.751 00:07:57.751 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.751 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:57.751 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:57.751 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:57.751 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:57.751 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:57.751 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:57.751 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:57.751 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:57.751 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:57.751 02:11:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.751 [2024-11-08 02:11:59.578577] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:57.751 [2024-11-08 02:11:59.578697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72758 ] 00:07:57.751 { 00:07:57.751 "subsystems": [ 00:07:57.751 { 00:07:57.751 "subsystem": "bdev", 00:07:57.751 "config": [ 00:07:57.751 { 00:07:57.751 "params": { 00:07:57.751 "trtype": "pcie", 00:07:57.751 "traddr": "0000:00:10.0", 00:07:57.751 "name": "Nvme0" 00:07:57.751 }, 00:07:57.751 "method": "bdev_nvme_attach_controller" 00:07:57.751 }, 00:07:57.751 { 00:07:57.751 "method": "bdev_wait_for_examine" 00:07:57.751 } 00:07:57.751 ] 00:07:57.751 } 00:07:57.751 ] 00:07:57.751 } 00:07:58.009 [2024-11-08 02:11:59.722244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.009 [2024-11-08 02:11:59.757390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.009 [2024-11-08 02:11:59.786142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.009  [2024-11-08T02:12:00.151Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:58.267 00:07:58.267 02:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:58.267 02:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:58.267 02:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:58.267 02:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:58.267 02:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:58.267 02:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:58.267 02:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:58.835 02:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:58.835 02:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:58.835 02:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:58.835 02:12:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:58.835 { 00:07:58.835 "subsystems": [ 00:07:58.835 { 00:07:58.835 "subsystem": "bdev", 00:07:58.835 "config": [ 00:07:58.835 { 00:07:58.835 "params": { 00:07:58.835 "trtype": "pcie", 00:07:58.835 "traddr": "0000:00:10.0", 00:07:58.835 "name": "Nvme0" 00:07:58.835 }, 00:07:58.835 "method": "bdev_nvme_attach_controller" 00:07:58.835 }, 00:07:58.835 { 00:07:58.835 "method": "bdev_wait_for_examine" 00:07:58.835 } 00:07:58.835 ] 00:07:58.835 } 00:07:58.835 ] 00:07:58.835 } 00:07:58.835 [2024-11-08 02:12:00.657678] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
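The clear_nvme step traced just above runs between iterations: it overwrites the range that was just exercised from /dev/zero in 1 MiB blocks (a single block here, since 61440 bytes fit in one) so the next bs/qd combination starts from zeroed media. A rough sketch; only the spdk_dd call itself appears in the log, and the round-up formula below is an assumption about how count=1 is derived:

    size=61440 bs=1048576
    count=$(( (size + bs - 1) / bs ))        # assumed rounding; the trace simply shows count=1
    "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob=Nvme0n1 --count="$count"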
00:07:58.835 [2024-11-08 02:12:00.657992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72777 ] 00:07:59.096 [2024-11-08 02:12:00.796995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.096 [2024-11-08 02:12:00.831393] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.096 [2024-11-08 02:12:00.859885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.096  [2024-11-08T02:12:01.239Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:59.355 00:07:59.355 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:59.355 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:59.355 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:59.355 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.355 { 00:07:59.355 "subsystems": [ 00:07:59.355 { 00:07:59.355 "subsystem": "bdev", 00:07:59.355 "config": [ 00:07:59.355 { 00:07:59.355 "params": { 00:07:59.355 "trtype": "pcie", 00:07:59.355 "traddr": "0000:00:10.0", 00:07:59.355 "name": "Nvme0" 00:07:59.355 }, 00:07:59.355 "method": "bdev_nvme_attach_controller" 00:07:59.355 }, 00:07:59.355 { 00:07:59.355 "method": "bdev_wait_for_examine" 00:07:59.355 } 00:07:59.355 ] 00:07:59.355 } 00:07:59.355 ] 00:07:59.355 } 00:07:59.355 [2024-11-08 02:12:01.153048] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:59.355 [2024-11-08 02:12:01.153180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72791 ] 00:07:59.613 [2024-11-08 02:12:01.289089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.613 [2024-11-08 02:12:01.325129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.613 [2024-11-08 02:12:01.356438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.613  [2024-11-08T02:12:01.756Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:59.872 00:07:59.872 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.872 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:59.872 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:59.872 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:59.872 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:59.872 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:59.872 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:59.872 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:59.872 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:59.872 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:59.872 02:12:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.872 [2024-11-08 02:12:01.652896] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:59.872 [2024-11-08 02:12:01.653054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72806 ] 00:07:59.872 { 00:07:59.872 "subsystems": [ 00:07:59.872 { 00:07:59.872 "subsystem": "bdev", 00:07:59.872 "config": [ 00:07:59.872 { 00:07:59.872 "params": { 00:07:59.872 "trtype": "pcie", 00:07:59.872 "traddr": "0000:00:10.0", 00:07:59.872 "name": "Nvme0" 00:07:59.872 }, 00:07:59.872 "method": "bdev_nvme_attach_controller" 00:07:59.872 }, 00:07:59.872 { 00:07:59.872 "method": "bdev_wait_for_examine" 00:07:59.872 } 00:07:59.872 ] 00:07:59.872 } 00:07:59.872 ] 00:07:59.872 } 00:08:00.131 [2024-11-08 02:12:01.796739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.131 [2024-11-08 02:12:01.831243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.131 [2024-11-08 02:12:01.859809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.131  [2024-11-08T02:12:02.274Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:00.390 00:08:00.390 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:00.390 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:00.390 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:00.390 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:00.391 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:00.391 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:00.391 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:00.391 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.958 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:00.958 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:00.958 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:00.958 02:12:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.958 [2024-11-08 02:12:02.721887] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
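From this point the same write/read/diff cycle repeats for the next block size. dd_rw builds its matrix by left-shifting the 4096-byte native size (the traced bss+=($((native_bs << bs))) lines) and pairing each size with queue depths 1 and 64 plus a per-size I/O count, which is why the logged transfer totals step from 61440 to 57344 and later 49152 bytes. A sketch of that setup, with the counts taken as observed in this log:

    native_bs=4096
    qds=(1 64)
    bss=()
    for s in 0 1 2; do
        bss+=($(( native_bs << s )))         # 4096 8192 16384
    done
    counts=(15 7 3)                          # as logged: 61440, 57344 and 49152 bytes in total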
00:08:00.958 [2024-11-08 02:12:02.722229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72825 ] 00:08:00.958 { 00:08:00.958 "subsystems": [ 00:08:00.958 { 00:08:00.958 "subsystem": "bdev", 00:08:00.958 "config": [ 00:08:00.958 { 00:08:00.958 "params": { 00:08:00.958 "trtype": "pcie", 00:08:00.958 "traddr": "0000:00:10.0", 00:08:00.958 "name": "Nvme0" 00:08:00.958 }, 00:08:00.958 "method": "bdev_nvme_attach_controller" 00:08:00.958 }, 00:08:00.958 { 00:08:00.958 "method": "bdev_wait_for_examine" 00:08:00.958 } 00:08:00.958 ] 00:08:00.958 } 00:08:00.958 ] 00:08:00.958 } 00:08:01.217 [2024-11-08 02:12:02.862508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.217 [2024-11-08 02:12:02.904466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.217 [2024-11-08 02:12:02.940260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.217  [2024-11-08T02:12:03.360Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:01.476 00:08:01.476 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:01.476 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:01.476 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:01.476 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:01.476 [2024-11-08 02:12:03.234840] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:01.476 [2024-11-08 02:12:03.235281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72839 ] 00:08:01.476 { 00:08:01.476 "subsystems": [ 00:08:01.476 { 00:08:01.476 "subsystem": "bdev", 00:08:01.476 "config": [ 00:08:01.476 { 00:08:01.476 "params": { 00:08:01.476 "trtype": "pcie", 00:08:01.476 "traddr": "0000:00:10.0", 00:08:01.476 "name": "Nvme0" 00:08:01.476 }, 00:08:01.476 "method": "bdev_nvme_attach_controller" 00:08:01.476 }, 00:08:01.476 { 00:08:01.476 "method": "bdev_wait_for_examine" 00:08:01.476 } 00:08:01.476 ] 00:08:01.476 } 00:08:01.476 ] 00:08:01.476 } 00:08:01.735 [2024-11-08 02:12:03.375857] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.735 [2024-11-08 02:12:03.410444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.735 [2024-11-08 02:12:03.441888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.735  [2024-11-08T02:12:03.879Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:01.995 00:08:01.995 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.995 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:01.995 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:01.995 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:01.995 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:01.995 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:01.995 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:01.995 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:01.995 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:01.995 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:01.995 02:12:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:01.995 [2024-11-08 02:12:03.730507] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
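Every spdk_dd call in this run receives the same small JSON block repeated throughout the log (bdev_nvme_attach_controller for Nvme0 at 0000:00:10.0 plus bdev_wait_for_examine) over an extra file descriptor, which is why each command line ends in --json /dev/fd/62. A simplified sketch; feeding the config through process substitution is an approximation of the harness's fd plumbing, since only gen_conf and the /dev/fd paths appear in the trace:

    # gen_conf is the harness helper that prints the bdev JSON shown above
    "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)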
00:08:01.995 [2024-11-08 02:12:03.730618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72854 ] 00:08:01.995 { 00:08:01.995 "subsystems": [ 00:08:01.995 { 00:08:01.995 "subsystem": "bdev", 00:08:01.995 "config": [ 00:08:01.995 { 00:08:01.995 "params": { 00:08:01.995 "trtype": "pcie", 00:08:01.995 "traddr": "0000:00:10.0", 00:08:01.995 "name": "Nvme0" 00:08:01.995 }, 00:08:01.995 "method": "bdev_nvme_attach_controller" 00:08:01.995 }, 00:08:01.995 { 00:08:01.995 "method": "bdev_wait_for_examine" 00:08:01.995 } 00:08:01.995 ] 00:08:01.995 } 00:08:01.995 ] 00:08:01.995 } 00:08:01.995 [2024-11-08 02:12:03.864307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.254 [2024-11-08 02:12:03.899321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.254 [2024-11-08 02:12:03.927020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.254  [2024-11-08T02:12:04.397Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:02.513 00:08:02.513 02:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:02.513 02:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:02.513 02:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:02.513 02:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:02.513 02:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:02.513 02:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:02.513 02:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.094 02:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:03.094 02:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:03.094 02:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.094 02:12:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.094 { 00:08:03.094 "subsystems": [ 00:08:03.094 { 00:08:03.094 "subsystem": "bdev", 00:08:03.094 "config": [ 00:08:03.094 { 00:08:03.094 "params": { 00:08:03.094 "trtype": "pcie", 00:08:03.094 "traddr": "0000:00:10.0", 00:08:03.094 "name": "Nvme0" 00:08:03.094 }, 00:08:03.094 "method": "bdev_nvme_attach_controller" 00:08:03.094 }, 00:08:03.094 { 00:08:03.094 "method": "bdev_wait_for_examine" 00:08:03.094 } 00:08:03.094 ] 00:08:03.094 } 00:08:03.094 ] 00:08:03.094 } 00:08:03.094 [2024-11-08 02:12:04.754477] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:03.094 [2024-11-08 02:12:04.754608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72873 ] 00:08:03.094 [2024-11-08 02:12:04.897170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.094 [2024-11-08 02:12:04.933004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.094 [2024-11-08 02:12:04.961426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.368  [2024-11-08T02:12:05.252Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:03.368 00:08:03.368 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:03.368 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:03.368 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.368 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.368 { 00:08:03.368 "subsystems": [ 00:08:03.368 { 00:08:03.368 "subsystem": "bdev", 00:08:03.368 "config": [ 00:08:03.368 { 00:08:03.368 "params": { 00:08:03.368 "trtype": "pcie", 00:08:03.368 "traddr": "0000:00:10.0", 00:08:03.368 "name": "Nvme0" 00:08:03.368 }, 00:08:03.368 "method": "bdev_nvme_attach_controller" 00:08:03.368 }, 00:08:03.368 { 00:08:03.368 "method": "bdev_wait_for_examine" 00:08:03.368 } 00:08:03.368 ] 00:08:03.368 } 00:08:03.368 ] 00:08:03.368 } 00:08:03.368 [2024-11-08 02:12:05.241799] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:03.368 [2024-11-08 02:12:05.241924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72887 ] 00:08:03.627 [2024-11-08 02:12:05.381180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.627 [2024-11-08 02:12:05.413158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.627 [2024-11-08 02:12:05.440202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.886  [2024-11-08T02:12:05.770Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:03.886 00:08:03.886 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:03.886 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:03.886 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:03.886 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:03.886 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:03.886 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:03.886 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:03.886 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:03.886 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:03.886 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.886 02:12:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.886 { 00:08:03.886 "subsystems": [ 00:08:03.886 { 00:08:03.886 "subsystem": "bdev", 00:08:03.886 "config": [ 00:08:03.886 { 00:08:03.886 "params": { 00:08:03.886 "trtype": "pcie", 00:08:03.886 "traddr": "0000:00:10.0", 00:08:03.886 "name": "Nvme0" 00:08:03.886 }, 00:08:03.886 "method": "bdev_nvme_attach_controller" 00:08:03.886 }, 00:08:03.886 { 00:08:03.886 "method": "bdev_wait_for_examine" 00:08:03.886 } 00:08:03.886 ] 00:08:03.886 } 00:08:03.886 ] 00:08:03.886 } 00:08:03.886 [2024-11-08 02:12:05.738894] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:03.886 [2024-11-08 02:12:05.739262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72902 ] 00:08:04.145 [2024-11-08 02:12:05.882362] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.145 [2024-11-08 02:12:05.918692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.145 [2024-11-08 02:12:05.946922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.403  [2024-11-08T02:12:06.287Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:04.403 00:08:04.403 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:04.403 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:04.403 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:04.403 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:04.403 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:04.403 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:04.403 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:04.403 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.971 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:04.971 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:04.971 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:04.971 02:12:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.971 [2024-11-08 02:12:06.630686] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:04.971 [2024-11-08 02:12:06.631014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72920 ] 00:08:04.971 { 00:08:04.971 "subsystems": [ 00:08:04.971 { 00:08:04.971 "subsystem": "bdev", 00:08:04.971 "config": [ 00:08:04.971 { 00:08:04.971 "params": { 00:08:04.971 "trtype": "pcie", 00:08:04.971 "traddr": "0000:00:10.0", 00:08:04.971 "name": "Nvme0" 00:08:04.971 }, 00:08:04.971 "method": "bdev_nvme_attach_controller" 00:08:04.971 }, 00:08:04.971 { 00:08:04.971 "method": "bdev_wait_for_examine" 00:08:04.971 } 00:08:04.971 ] 00:08:04.971 } 00:08:04.971 ] 00:08:04.971 } 00:08:04.971 [2024-11-08 02:12:06.763136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.971 [2024-11-08 02:12:06.799782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.971 [2024-11-08 02:12:06.828150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.230  [2024-11-08T02:12:07.114Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:05.230 00:08:05.230 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:05.230 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:05.230 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.230 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.230 [2024-11-08 02:12:07.108984] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:05.230 [2024-11-08 02:12:07.109094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72935 ] 00:08:05.230 { 00:08:05.230 "subsystems": [ 00:08:05.230 { 00:08:05.230 "subsystem": "bdev", 00:08:05.230 "config": [ 00:08:05.230 { 00:08:05.230 "params": { 00:08:05.230 "trtype": "pcie", 00:08:05.230 "traddr": "0000:00:10.0", 00:08:05.230 "name": "Nvme0" 00:08:05.230 }, 00:08:05.230 "method": "bdev_nvme_attach_controller" 00:08:05.230 }, 00:08:05.230 { 00:08:05.230 "method": "bdev_wait_for_examine" 00:08:05.230 } 00:08:05.230 ] 00:08:05.230 } 00:08:05.230 ] 00:08:05.230 } 00:08:05.489 [2024-11-08 02:12:07.247753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.489 [2024-11-08 02:12:07.287931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.489 [2024-11-08 02:12:07.318910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.747  [2024-11-08T02:12:07.631Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:05.747 00:08:05.747 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.747 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:05.747 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:05.747 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:05.747 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:05.747 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:05.747 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:05.747 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:05.747 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:05.747 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.747 02:12:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.747 [2024-11-08 02:12:07.606070] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:05.748 [2024-11-08 02:12:07.606340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72945 ] 00:08:05.748 { 00:08:05.748 "subsystems": [ 00:08:05.748 { 00:08:05.748 "subsystem": "bdev", 00:08:05.748 "config": [ 00:08:05.748 { 00:08:05.748 "params": { 00:08:05.748 "trtype": "pcie", 00:08:05.748 "traddr": "0000:00:10.0", 00:08:05.748 "name": "Nvme0" 00:08:05.748 }, 00:08:05.748 "method": "bdev_nvme_attach_controller" 00:08:05.748 }, 00:08:05.748 { 00:08:05.748 "method": "bdev_wait_for_examine" 00:08:05.748 } 00:08:05.748 ] 00:08:05.748 } 00:08:05.748 ] 00:08:05.748 } 00:08:06.006 [2024-11-08 02:12:07.746253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.006 [2024-11-08 02:12:07.788561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.006 [2024-11-08 02:12:07.819067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.264  [2024-11-08T02:12:08.148Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:06.264 00:08:06.264 02:12:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:06.264 02:12:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:06.264 02:12:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:06.264 02:12:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:06.264 02:12:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:06.264 02:12:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:06.264 02:12:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.832 02:12:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:06.832 02:12:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:06.832 02:12:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:06.832 02:12:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.832 { 00:08:06.832 "subsystems": [ 00:08:06.832 { 00:08:06.832 "subsystem": "bdev", 00:08:06.832 "config": [ 00:08:06.832 { 00:08:06.832 "params": { 00:08:06.832 "trtype": "pcie", 00:08:06.832 "traddr": "0000:00:10.0", 00:08:06.832 "name": "Nvme0" 00:08:06.832 }, 00:08:06.832 "method": "bdev_nvme_attach_controller" 00:08:06.832 }, 00:08:06.832 { 00:08:06.832 "method": "bdev_wait_for_examine" 00:08:06.832 } 00:08:06.832 ] 00:08:06.832 } 00:08:06.832 ] 00:08:06.832 } 00:08:06.832 [2024-11-08 02:12:08.620266] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:06.832 [2024-11-08 02:12:08.620510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72964 ] 00:08:07.092 [2024-11-08 02:12:08.759835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.092 [2024-11-08 02:12:08.796505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.092 [2024-11-08 02:12:08.825235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.092  [2024-11-08T02:12:09.234Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:07.350 00:08:07.350 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:07.350 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:07.350 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:07.350 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.350 [2024-11-08 02:12:09.107323] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:07.350 [2024-11-08 02:12:09.107419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72977 ] 00:08:07.350 { 00:08:07.350 "subsystems": [ 00:08:07.350 { 00:08:07.350 "subsystem": "bdev", 00:08:07.350 "config": [ 00:08:07.350 { 00:08:07.350 "params": { 00:08:07.350 "trtype": "pcie", 00:08:07.350 "traddr": "0000:00:10.0", 00:08:07.350 "name": "Nvme0" 00:08:07.350 }, 00:08:07.350 "method": "bdev_nvme_attach_controller" 00:08:07.350 }, 00:08:07.350 { 00:08:07.350 "method": "bdev_wait_for_examine" 00:08:07.350 } 00:08:07.350 ] 00:08:07.350 } 00:08:07.350 ] 00:08:07.350 } 00:08:07.608 [2024-11-08 02:12:09.247691] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.608 [2024-11-08 02:12:09.284984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.608 [2024-11-08 02:12:09.314742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.608  [2024-11-08T02:12:09.750Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:07.866 00:08:07.866 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.866 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:07.866 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:07.866 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.866 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:07.866 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:07.866 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:07.866 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
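For readers following the trace: each basic_rw iteration above has the same shape. spdk_dd writes a generated dump file to the Nvme0n1 bdev, reads the data back into a second dump file, and diffs the two, with the bdev described by the small JSON config that gen_conf prints and passes over an anonymous file descriptor (--json /dev/fd/62). A minimal sketch of one iteration follows; the dump-file names and the use of process substitution are illustrative assumptions, not a verbatim replay of this run.

  # Illustrative sketch of one basic_rw write/read-back/verify pass (bs=16384, qd=64).
  # The spdk_dd path and the bdev JSON are taken from the trace; dump-file names are assumptions.
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CONF='{"subsystems": [{"subsystem": "bdev", "config": [{"params": {"trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0"}, "method": "bdev_nvme_attach_controller"}, {"method": "bdev_wait_for_examine"}]}]}'
  $DD --if=dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json <(echo "$CONF")            # write generated data to the bdev
  $DD --ib=Nvme0n1 --of=dd.dump1 --bs=16384 --qd=64 --count=3 --json <(echo "$CONF")  # read the same region back
  diff -q dd.dump0 dd.dump1                                                           # round trip must be byte-identical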
00:08:07.866 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:07.866 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:07.867 02:12:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.867 [2024-11-08 02:12:09.599511] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:07.867 [2024-11-08 02:12:09.599614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72993 ] 00:08:07.867 { 00:08:07.867 "subsystems": [ 00:08:07.867 { 00:08:07.867 "subsystem": "bdev", 00:08:07.867 "config": [ 00:08:07.867 { 00:08:07.867 "params": { 00:08:07.867 "trtype": "pcie", 00:08:07.867 "traddr": "0000:00:10.0", 00:08:07.867 "name": "Nvme0" 00:08:07.867 }, 00:08:07.867 "method": "bdev_nvme_attach_controller" 00:08:07.867 }, 00:08:07.867 { 00:08:07.867 "method": "bdev_wait_for_examine" 00:08:07.867 } 00:08:07.867 ] 00:08:07.867 } 00:08:07.867 ] 00:08:07.867 } 00:08:07.867 [2024-11-08 02:12:09.739861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.125 [2024-11-08 02:12:09.777920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.125 [2024-11-08 02:12:09.807672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.125  [2024-11-08T02:12:10.268Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:08.384 00:08:08.384 ************************************ 00:08:08.384 END TEST dd_rw 00:08:08.384 ************************************ 00:08:08.384 00:08:08.384 real 0m12.133s 00:08:08.384 user 0m8.985s 00:08:08.384 sys 0m3.791s 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.384 ************************************ 00:08:08.384 START TEST dd_rw_offset 00:08:08.384 ************************************ 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=plhv3tk1gaz9q88nfaqy892j1uws6r3xjfv6uuewgnzcf3b8hdwz5ed4zeo4sqn3n3m83yvgg1p5vcic337f4ltugntel69rdbj0babkbl6k94lrn40qajleyytijrc0t3p5vunk4c3jjtr73693s3h322ryazp5gs0ugx6nwxqmyw2jj7my03axiu6n1l5yx4dryi4vujsmj8f3ekmnt7ewrie553cxilvqgxidgm71jedwyt0yl21ve6r86qajwwgrek391fls49shgdvvk3e151g5dybneya20xio6pigqv22j0u1zvbp7i4qt2srpsmemg1i4no6fpr8pa1kb4e1tnyifr2qideaxoph8wb3hi7bjws3qsdjy0ybv0xoyrd7g8lo5gjfv4g60n356bikutz84o7jaerwxt6zndwxfun4ikc1ly0p9xdovox1439oaa98wt7ok1zwtghwglgozwmzqfrjp4mpxxrb4ydy2py9ap9kybyb4wl399va1qrjriztcf0cjt7ixze77pt5djydm2r6qknmzu57u2gxmybwu13r36z13rvgiraz682o1ejp72lirr9ew9ho74s3hzl6oqq30aw5fs38io8qmo68dmffdtrz1qh02i1ss3ff9q3f58r7d4j94p79hvmpnux2waixm3tnujbror0o1htuo57m599i3v29t1zucd83yugl77auuagneh8a0vfn2t098hqnhxi4oho2dapbrzq4t8cchel7327989an4u9b7arl48d91qc0d4r8p2w5omnb9vddfkagllinvy39p5d7up2shjzor99ea0fp3k7n9uk513lnlbaiojvv1ipc8gfommadx2cm6yifun8uc6v0h41ysu4xgesk9fq4gxk5s23ltvcct1rmlx0gim44boreposikqpr6g26hgpfy3p7lvv8e1dxcufism28bvxmvqlp9lcgu1mjlt5uxzn31lfacak2fmr2rih2t97d4r6bj09n0k7md1vq1efe6ufipkhsxlczq16toxvkvmgqkq3l5x5n8zpct0f3cjsiodm37irxlgj5t94ztp1mka62y8p15wax6z2mrpgmi1r9wws1bqiqc91jbebewj18cz3j4jt0guyimab1cv79nz15vct2kywqc7p8zxozfdd7h20vetcp2v0jytiinzepvzqrawm3gckwiuwajcvo8ikmwfezn26b6czmgaud9gnwip3zwil1svh5hjqtzdfvmrukjcc6gqrnr8vwqqtum4isb7j0lhefnkkwscob3qq9x7judozb52vveoth5ni15ffkfe7b6mv3vo1a5k7dipc2sfhyrh2wkkzmgbu9a7lh3ip0fcwe8w8bfi3oviij5quz4qdjxaejqh68vngyaf8q7fnnn58tzrzqeaw9tegq2aay1u9bh7oe2k3s18pxact193osc9gewwytcbndpgw9om3ph77c24aa23dl0ctdnyz19vq5mlojxfj3uwqshmnzqhf9imjbr5xjpgyttu7eegr7jfq045wqludj9sg22kkkakkj3vmdypt9wa8zv9lrtrlh35lowezn6tbu0qhbny2f7vgtsphh672dwu19eqb73lb7ozdlczicadkdjod36mg460e1j8zxzov7nn4vpvsi0ua4b61uuefc30xo97actwhtz20mer1uafirm492f1q8us0cr6c7lmxa46f2zjngcuopnpslcbnmj0mo9ie6j0k14zpreig8bbtibvr4fkbhmbxdasp5fs2w5jkh069elk8fqmet3528d6gogqsaua3i4vx9ctpcw5qryymbpqq2m3pflhuosib7fzwtjgf53u29zw872op62leyy18dxw7quub9y84az5n6x4lvce2r4dwcam6aqv52g2pafj4fxwzx3yf16igwa0ln5ob0ns237ucfo7kvjpc3okunco4x0u1lf5u2qoo4jrkp7v8jies8s610x7j2mrw36v6delf7vccb13spnth638apcvec8kfzxdmb80lc5co1yel272sche6y4wbo1ku8evnx30npkl91mf6llmxnxg1ohu2nrgl7zljtj88vdbtsnr6rpd7mchvhg44jjnqpj9phiarc4ocizvvefiubx79vvc9mvmfotind7w6xo8a8lmjiposjbkvs36lczqi38g0x411enezpg8sayeubsh9iv9l2amttpqn45y5heoy999xczvorcq4zh7w59r64f6mjdst843cv717brogup9ijucccphm6grrb5qx1p8e3290obcto0h5s4tjpd9giax328hqarx8h0ujtopb4fttdp4nqygadzijo3xzt52s0qeojrbj6o9l35eyl6bjshandzgp80nyydwhbvb8oj8lk2l8gts6jhvpnd2r1a2imx6imc9mw3hdwje5swpjktwqjyqjd55wmwq7v9brbnsufqy7q3lqfmai6amzq1umk6x5u4soh7l1ccpm48jce9637cu83cfr8ihjdpb7v4xytpusfv1r1lmt4ugr6wbuoz5qlivjw2jp9tkgio23nszrwyj5n8jmmf1kh94wkjgusf9pi7yqh06hx9nzqrbwh47suimyxy1yommvmqibjw8qz0c0xgyifnrzrpweyojv71zmvmbnhtymft4r5bso1ax6vsveubj95781vnhe12wei2rkahrsc2zsm3cde526uiphruzoprswtntmgoa8n9bx35ksyrxpftakh2lsqgxe0qepgy1jqio5gjw9pplkgtwteyr28jrj3fys9dbufa0qd4b2cn9xajx3fknwcszwipqc9yhjb443l1xbgw9kpf3hyojclpmz4ymmr6wgl2q8duc9xyt6y766yfkrqw49tt3sffnjly895ek8c5anhsecdi2inoyt724jazti054x643zwzlxau74bilm6dylz2vl0nfjzizrxxn1y92phyy7jjgnij16bwh98yatoy8iylim2xtdngnq79a9s26a3jvjxd4253ujassx70bahsdk29s1pa8ifvg9b62jjr6in4m98mf5zspzn052hhbnz2ht5gyqoei9th4pwt0b8l7rxba87si7nu41vdn0m8fsmzdd2d64cn52srzxscbmzdtgdjt8rg2a0g5c6odfgckbxz80v4nt77848q24ti6zyzzwhoj2caoypn4mz2l1egqq7uzd53i87lpmfyt6fl9tcvm09nsfgndyyiritgz7kgc0ribed6tfm0obyepir6hqzvzpqx5fzg058gzeafrpchdne21pg1q409c15xanfopxe6bnbskuqtzcxmvnz8l4pnox1ld6vu2k3n0vcth807bcxyohbkd17gu9eovsifb4dkgr4ok1o9utpub07mn2cuc5fp98tlex7iatp1neopjbiuaf7yz02ggfkp4df64708fvlfckzo2q5fofwbpmlc1b2s60yzudsukl13ij8zii4ap1rimqrqfrmcljdtd47tnfd63kowrb5qn55qg
7df6fs2qfxrwujl26agtn61lneamjqylmsmg66y1xgxu53obk9n10ey3ab1ryw74mvh1cegriityvtdipg4hccwi6qrbbblr4fq26h39b4qadtq1sepxcx1e960c6qzdzvidx6yyy774rh7nbukxil8a8tgtfvmyb2whubq4ok340ovot2dlwuvyaj6d7mtm9vj772s1p75bzq0iq710punjkarhbagokiduij38o9b8zhcymh9fqnl782kjdkzlzqtj1ts3itxxhzpay961fo3bmb0ft9s0d6h0duoge246c2df9b59xf31rqkycoag3orzjdmhw42mo40yvvir2vt5gaq89ancmwpfutxvysxklzssu1skbyyp5ycdw8w4utuz79613k3g1tnts842zilfazrxi27hcn3p2x8xe2cqaaas3esawne4r6b15jv14n94k71tx6awg6g9jdcremw6c48akot2vnmpza5p3y2llz2kry8tf1ezdzisbobdggc460za2whzs3i9dbmvfjeqozvpzjiwia 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:08.384 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:08.384 [2024-11-08 02:12:10.187764] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:08.384 [2024-11-08 02:12:10.187862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73029 ] 00:08:08.384 { 00:08:08.384 "subsystems": [ 00:08:08.384 { 00:08:08.384 "subsystem": "bdev", 00:08:08.384 "config": [ 00:08:08.384 { 00:08:08.384 "params": { 00:08:08.384 "trtype": "pcie", 00:08:08.384 "traddr": "0000:00:10.0", 00:08:08.384 "name": "Nvme0" 00:08:08.384 }, 00:08:08.384 "method": "bdev_nvme_attach_controller" 00:08:08.384 }, 00:08:08.384 { 00:08:08.384 "method": "bdev_wait_for_examine" 00:08:08.384 } 00:08:08.384 ] 00:08:08.384 } 00:08:08.384 ] 00:08:08.384 } 00:08:08.643 [2024-11-08 02:12:10.327734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.643 [2024-11-08 02:12:10.368485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.643 [2024-11-08 02:12:10.401519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.643  [2024-11-08T02:12:10.785Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:08.901 00:08:08.901 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:08.901 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:08.901 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:08.901 02:12:10 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:08.901 { 00:08:08.901 "subsystems": [ 00:08:08.901 { 00:08:08.901 "subsystem": "bdev", 00:08:08.901 "config": [ 00:08:08.901 { 00:08:08.901 "params": { 00:08:08.901 "trtype": "pcie", 00:08:08.901 "traddr": "0000:00:10.0", 00:08:08.901 "name": "Nvme0" 00:08:08.901 }, 00:08:08.901 "method": "bdev_nvme_attach_controller" 00:08:08.901 }, 00:08:08.901 { 00:08:08.901 "method": "bdev_wait_for_examine" 00:08:08.901 } 00:08:08.901 ] 00:08:08.901 } 00:08:08.901 ] 00:08:08.901 } 00:08:08.901 [2024-11-08 02:12:10.673327] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:08.901 [2024-11-08 02:12:10.673580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73037 ] 00:08:09.160 [2024-11-08 02:12:10.811809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.161 [2024-11-08 02:12:10.845586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.161 [2024-11-08 02:12:10.873365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.161  [2024-11-08T02:12:11.304Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:09.420 00:08:09.420 02:12:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ plhv3tk1gaz9q88nfaqy892j1uws6r3xjfv6uuewgnzcf3b8hdwz5ed4zeo4sqn3n3m83yvgg1p5vcic337f4ltugntel69rdbj0babkbl6k94lrn40qajleyytijrc0t3p5vunk4c3jjtr73693s3h322ryazp5gs0ugx6nwxqmyw2jj7my03axiu6n1l5yx4dryi4vujsmj8f3ekmnt7ewrie553cxilvqgxidgm71jedwyt0yl21ve6r86qajwwgrek391fls49shgdvvk3e151g5dybneya20xio6pigqv22j0u1zvbp7i4qt2srpsmemg1i4no6fpr8pa1kb4e1tnyifr2qideaxoph8wb3hi7bjws3qsdjy0ybv0xoyrd7g8lo5gjfv4g60n356bikutz84o7jaerwxt6zndwxfun4ikc1ly0p9xdovox1439oaa98wt7ok1zwtghwglgozwmzqfrjp4mpxxrb4ydy2py9ap9kybyb4wl399va1qrjriztcf0cjt7ixze77pt5djydm2r6qknmzu57u2gxmybwu13r36z13rvgiraz682o1ejp72lirr9ew9ho74s3hzl6oqq30aw5fs38io8qmo68dmffdtrz1qh02i1ss3ff9q3f58r7d4j94p79hvmpnux2waixm3tnujbror0o1htuo57m599i3v29t1zucd83yugl77auuagneh8a0vfn2t098hqnhxi4oho2dapbrzq4t8cchel7327989an4u9b7arl48d91qc0d4r8p2w5omnb9vddfkagllinvy39p5d7up2shjzor99ea0fp3k7n9uk513lnlbaiojvv1ipc8gfommadx2cm6yifun8uc6v0h41ysu4xgesk9fq4gxk5s23ltvcct1rmlx0gim44boreposikqpr6g26hgpfy3p7lvv8e1dxcufism28bvxmvqlp9lcgu1mjlt5uxzn31lfacak2fmr2rih2t97d4r6bj09n0k7md1vq1efe6ufipkhsxlczq16toxvkvmgqkq3l5x5n8zpct0f3cjsiodm37irxlgj5t94ztp1mka62y8p15wax6z2mrpgmi1r9wws1bqiqc91jbebewj18cz3j4jt0guyimab1cv79nz15vct2kywqc7p8zxozfdd7h20vetcp2v0jytiinzepvzqrawm3gckwiuwajcvo8ikmwfezn26b6czmgaud9gnwip3zwil1svh5hjqtzdfvmrukjcc6gqrnr8vwqqtum4isb7j0lhefnkkwscob3qq9x7judozb52vveoth5ni15ffkfe7b6mv3vo1a5k7dipc2sfhyrh2wkkzmgbu9a7lh3ip0fcwe8w8bfi3oviij5quz4qdjxaejqh68vngyaf8q7fnnn58tzrzqeaw9tegq2aay1u9bh7oe2k3s18pxact193osc9gewwytcbndpgw9om3ph77c24aa23dl0ctdnyz19vq5mlojxfj3uwqshmnzqhf9imjbr5xjpgyttu7eegr7jfq045wqludj9sg22kkkakkj3vmdypt9wa8zv9lrtrlh35lowezn6tbu0qhbny2f7vgtsphh672dwu19eqb73lb7ozdlczicadkdjod36mg460e1j8zxzov7nn4vpvsi0ua4b61uuefc30xo97actwhtz20mer1uafirm492f1q8us0cr6c7lmxa46f2zjngcuopnpslcbnmj0mo9ie6j0k14zpreig8bbtibvr4fkbhmbxdasp5fs2w5jkh069elk8fqmet3528d6gogqsaua3i4vx9ctpcw5qryymbpqq2m3pflhuosib7fzwtjgf53u29zw872op62leyy18dxw7quub9y84az5n6x4lvce2r4dwcam6aqv52g2pafj4fxwzx3yf16igwa0ln5ob0ns237ucfo7kvjpc3okunco4x0u1lf5u2qoo4jrkp7v8jies8s610x7j2mrw36v6delf7vccb13spnth638apcvec8kfzxdmb80lc5co1yel272sche6y4wbo1ku8evnx30npkl91mf6llmxnxg1ohu2nrgl7zljtj88vdbtsnr6rpd7mchvhg44jjnqpj9phiarc4ocizvvefiubx79vvc9mvmfotind7w6xo8a8lmjiposjbkvs36lczqi38g0x411enezpg8sayeubsh9iv9l2amttpqn45y5heoy999xczvorcq4zh7w59r64f6mjdst843cv717brogup9ijucccphm6grrb5qx1p8e3290obcto0h5s4tjpd9giax328hqarx8h0ujtopb4fttdp4nqygadzijo3xzt52s0qeojrbj6o9l35eyl6bjshandzgp80nyydwhbvb8oj8lk2l8gts6jhvpnd2r1a2imx6imc9mw3hdwje5swpjktwqjyqjd55wmwq7v9brbnsufqy7q3lqfmai6amzq1umk6x5u4soh7l1ccpm48jce9637cu83cfr8ihjdpb7v4xytpusfv1r1lmt4ugr6wbuoz5qlivjw2jp9tkgio23nszrwyj5n8jmmf1kh94wkjgus
f9pi7yqh06hx9nzqrbwh47suimyxy1yommvmqibjw8qz0c0xgyifnrzrpweyojv71zmvmbnhtymft4r5bso1ax6vsveubj95781vnhe12wei2rkahrsc2zsm3cde526uiphruzoprswtntmgoa8n9bx35ksyrxpftakh2lsqgxe0qepgy1jqio5gjw9pplkgtwteyr28jrj3fys9dbufa0qd4b2cn9xajx3fknwcszwipqc9yhjb443l1xbgw9kpf3hyojclpmz4ymmr6wgl2q8duc9xyt6y766yfkrqw49tt3sffnjly895ek8c5anhsecdi2inoyt724jazti054x643zwzlxau74bilm6dylz2vl0nfjzizrxxn1y92phyy7jjgnij16bwh98yatoy8iylim2xtdngnq79a9s26a3jvjxd4253ujassx70bahsdk29s1pa8ifvg9b62jjr6in4m98mf5zspzn052hhbnz2ht5gyqoei9th4pwt0b8l7rxba87si7nu41vdn0m8fsmzdd2d64cn52srzxscbmzdtgdjt8rg2a0g5c6odfgckbxz80v4nt77848q24ti6zyzzwhoj2caoypn4mz2l1egqq7uzd53i87lpmfyt6fl9tcvm09nsfgndyyiritgz7kgc0ribed6tfm0obyepir6hqzvzpqx5fzg058gzeafrpchdne21pg1q409c15xanfopxe6bnbskuqtzcxmvnz8l4pnox1ld6vu2k3n0vcth807bcxyohbkd17gu9eovsifb4dkgr4ok1o9utpub07mn2cuc5fp98tlex7iatp1neopjbiuaf7yz02ggfkp4df64708fvlfckzo2q5fofwbpmlc1b2s60yzudsukl13ij8zii4ap1rimqrqfrmcljdtd47tnfd63kowrb5qn55qg7df6fs2qfxrwujl26agtn61lneamjqylmsmg66y1xgxu53obk9n10ey3ab1ryw74mvh1cegriityvtdipg4hccwi6qrbbblr4fq26h39b4qadtq1sepxcx1e960c6qzdzvidx6yyy774rh7nbukxil8a8tgtfvmyb2whubq4ok340ovot2dlwuvyaj6d7mtm9vj772s1p75bzq0iq710punjkarhbagokiduij38o9b8zhcymh9fqnl782kjdkzlzqtj1ts3itxxhzpay961fo3bmb0ft9s0d6h0duoge246c2df9b59xf31rqkycoag3orzjdmhw42mo40yvvir2vt5gaq89ancmwpfutxvysxklzssu1skbyyp5ycdw8w4utuz79613k3g1tnts842zilfazrxi27hcn3p2x8xe2cqaaas3esawne4r6b15jv14n94k71tx6awg6g9jdcremw6c48akot2vnmpza5p3y2llz2kry8tf1ezdzisbobdggc460za2whzs3i9dbmvfjeqozvpzjiwia == \p\l\h\v\3\t\k\1\g\a\z\9\q\8\8\n\f\a\q\y\8\9\2\j\1\u\w\s\6\r\3\x\j\f\v\6\u\u\e\w\g\n\z\c\f\3\b\8\h\d\w\z\5\e\d\4\z\e\o\4\s\q\n\3\n\3\m\8\3\y\v\g\g\1\p\5\v\c\i\c\3\3\7\f\4\l\t\u\g\n\t\e\l\6\9\r\d\b\j\0\b\a\b\k\b\l\6\k\9\4\l\r\n\4\0\q\a\j\l\e\y\y\t\i\j\r\c\0\t\3\p\5\v\u\n\k\4\c\3\j\j\t\r\7\3\6\9\3\s\3\h\3\2\2\r\y\a\z\p\5\g\s\0\u\g\x\6\n\w\x\q\m\y\w\2\j\j\7\m\y\0\3\a\x\i\u\6\n\1\l\5\y\x\4\d\r\y\i\4\v\u\j\s\m\j\8\f\3\e\k\m\n\t\7\e\w\r\i\e\5\5\3\c\x\i\l\v\q\g\x\i\d\g\m\7\1\j\e\d\w\y\t\0\y\l\2\1\v\e\6\r\8\6\q\a\j\w\w\g\r\e\k\3\9\1\f\l\s\4\9\s\h\g\d\v\v\k\3\e\1\5\1\g\5\d\y\b\n\e\y\a\2\0\x\i\o\6\p\i\g\q\v\2\2\j\0\u\1\z\v\b\p\7\i\4\q\t\2\s\r\p\s\m\e\m\g\1\i\4\n\o\6\f\p\r\8\p\a\1\k\b\4\e\1\t\n\y\i\f\r\2\q\i\d\e\a\x\o\p\h\8\w\b\3\h\i\7\b\j\w\s\3\q\s\d\j\y\0\y\b\v\0\x\o\y\r\d\7\g\8\l\o\5\g\j\f\v\4\g\6\0\n\3\5\6\b\i\k\u\t\z\8\4\o\7\j\a\e\r\w\x\t\6\z\n\d\w\x\f\u\n\4\i\k\c\1\l\y\0\p\9\x\d\o\v\o\x\1\4\3\9\o\a\a\9\8\w\t\7\o\k\1\z\w\t\g\h\w\g\l\g\o\z\w\m\z\q\f\r\j\p\4\m\p\x\x\r\b\4\y\d\y\2\p\y\9\a\p\9\k\y\b\y\b\4\w\l\3\9\9\v\a\1\q\r\j\r\i\z\t\c\f\0\c\j\t\7\i\x\z\e\7\7\p\t\5\d\j\y\d\m\2\r\6\q\k\n\m\z\u\5\7\u\2\g\x\m\y\b\w\u\1\3\r\3\6\z\1\3\r\v\g\i\r\a\z\6\8\2\o\1\e\j\p\7\2\l\i\r\r\9\e\w\9\h\o\7\4\s\3\h\z\l\6\o\q\q\3\0\a\w\5\f\s\3\8\i\o\8\q\m\o\6\8\d\m\f\f\d\t\r\z\1\q\h\0\2\i\1\s\s\3\f\f\9\q\3\f\5\8\r\7\d\4\j\9\4\p\7\9\h\v\m\p\n\u\x\2\w\a\i\x\m\3\t\n\u\j\b\r\o\r\0\o\1\h\t\u\o\5\7\m\5\9\9\i\3\v\2\9\t\1\z\u\c\d\8\3\y\u\g\l\7\7\a\u\u\a\g\n\e\h\8\a\0\v\f\n\2\t\0\9\8\h\q\n\h\x\i\4\o\h\o\2\d\a\p\b\r\z\q\4\t\8\c\c\h\e\l\7\3\2\7\9\8\9\a\n\4\u\9\b\7\a\r\l\4\8\d\9\1\q\c\0\d\4\r\8\p\2\w\5\o\m\n\b\9\v\d\d\f\k\a\g\l\l\i\n\v\y\3\9\p\5\d\7\u\p\2\s\h\j\z\o\r\9\9\e\a\0\f\p\3\k\7\n\9\u\k\5\1\3\l\n\l\b\a\i\o\j\v\v\1\i\p\c\8\g\f\o\m\m\a\d\x\2\c\m\6\y\i\f\u\n\8\u\c\6\v\0\h\4\1\y\s\u\4\x\g\e\s\k\9\f\q\4\g\x\k\5\s\2\3\l\t\v\c\c\t\1\r\m\l\x\0\g\i\m\4\4\b\o\r\e\p\o\s\i\k\q\p\r\6\g\2\6\h\g\p\f\y\3\p\7\l\v\v\8\e\1\d\x\c\u\f\i\s\m\2\8\b\v\x\m\v\q\l\p\9\l\c\g\u\1\m\j\l\t\5\u\x\z\n\3\1\l\f\a\c\a\k\2\f\m\r\2\r\i\h\2\t\9\7\d\4\r\6\b\j\0\9\n\0\k\7\m\d\1\v\q\1\e\f\e\6\u\f\i\p\k\h\
s\x\l\c\z\q\1\6\t\o\x\v\k\v\m\g\q\k\q\3\l\5\x\5\n\8\z\p\c\t\0\f\3\c\j\s\i\o\d\m\3\7\i\r\x\l\g\j\5\t\9\4\z\t\p\1\m\k\a\6\2\y\8\p\1\5\w\a\x\6\z\2\m\r\p\g\m\i\1\r\9\w\w\s\1\b\q\i\q\c\9\1\j\b\e\b\e\w\j\1\8\c\z\3\j\4\j\t\0\g\u\y\i\m\a\b\1\c\v\7\9\n\z\1\5\v\c\t\2\k\y\w\q\c\7\p\8\z\x\o\z\f\d\d\7\h\2\0\v\e\t\c\p\2\v\0\j\y\t\i\i\n\z\e\p\v\z\q\r\a\w\m\3\g\c\k\w\i\u\w\a\j\c\v\o\8\i\k\m\w\f\e\z\n\2\6\b\6\c\z\m\g\a\u\d\9\g\n\w\i\p\3\z\w\i\l\1\s\v\h\5\h\j\q\t\z\d\f\v\m\r\u\k\j\c\c\6\g\q\r\n\r\8\v\w\q\q\t\u\m\4\i\s\b\7\j\0\l\h\e\f\n\k\k\w\s\c\o\b\3\q\q\9\x\7\j\u\d\o\z\b\5\2\v\v\e\o\t\h\5\n\i\1\5\f\f\k\f\e\7\b\6\m\v\3\v\o\1\a\5\k\7\d\i\p\c\2\s\f\h\y\r\h\2\w\k\k\z\m\g\b\u\9\a\7\l\h\3\i\p\0\f\c\w\e\8\w\8\b\f\i\3\o\v\i\i\j\5\q\u\z\4\q\d\j\x\a\e\j\q\h\6\8\v\n\g\y\a\f\8\q\7\f\n\n\n\5\8\t\z\r\z\q\e\a\w\9\t\e\g\q\2\a\a\y\1\u\9\b\h\7\o\e\2\k\3\s\1\8\p\x\a\c\t\1\9\3\o\s\c\9\g\e\w\w\y\t\c\b\n\d\p\g\w\9\o\m\3\p\h\7\7\c\2\4\a\a\2\3\d\l\0\c\t\d\n\y\z\1\9\v\q\5\m\l\o\j\x\f\j\3\u\w\q\s\h\m\n\z\q\h\f\9\i\m\j\b\r\5\x\j\p\g\y\t\t\u\7\e\e\g\r\7\j\f\q\0\4\5\w\q\l\u\d\j\9\s\g\2\2\k\k\k\a\k\k\j\3\v\m\d\y\p\t\9\w\a\8\z\v\9\l\r\t\r\l\h\3\5\l\o\w\e\z\n\6\t\b\u\0\q\h\b\n\y\2\f\7\v\g\t\s\p\h\h\6\7\2\d\w\u\1\9\e\q\b\7\3\l\b\7\o\z\d\l\c\z\i\c\a\d\k\d\j\o\d\3\6\m\g\4\6\0\e\1\j\8\z\x\z\o\v\7\n\n\4\v\p\v\s\i\0\u\a\4\b\6\1\u\u\e\f\c\3\0\x\o\9\7\a\c\t\w\h\t\z\2\0\m\e\r\1\u\a\f\i\r\m\4\9\2\f\1\q\8\u\s\0\c\r\6\c\7\l\m\x\a\4\6\f\2\z\j\n\g\c\u\o\p\n\p\s\l\c\b\n\m\j\0\m\o\9\i\e\6\j\0\k\1\4\z\p\r\e\i\g\8\b\b\t\i\b\v\r\4\f\k\b\h\m\b\x\d\a\s\p\5\f\s\2\w\5\j\k\h\0\6\9\e\l\k\8\f\q\m\e\t\3\5\2\8\d\6\g\o\g\q\s\a\u\a\3\i\4\v\x\9\c\t\p\c\w\5\q\r\y\y\m\b\p\q\q\2\m\3\p\f\l\h\u\o\s\i\b\7\f\z\w\t\j\g\f\5\3\u\2\9\z\w\8\7\2\o\p\6\2\l\e\y\y\1\8\d\x\w\7\q\u\u\b\9\y\8\4\a\z\5\n\6\x\4\l\v\c\e\2\r\4\d\w\c\a\m\6\a\q\v\5\2\g\2\p\a\f\j\4\f\x\w\z\x\3\y\f\1\6\i\g\w\a\0\l\n\5\o\b\0\n\s\2\3\7\u\c\f\o\7\k\v\j\p\c\3\o\k\u\n\c\o\4\x\0\u\1\l\f\5\u\2\q\o\o\4\j\r\k\p\7\v\8\j\i\e\s\8\s\6\1\0\x\7\j\2\m\r\w\3\6\v\6\d\e\l\f\7\v\c\c\b\1\3\s\p\n\t\h\6\3\8\a\p\c\v\e\c\8\k\f\z\x\d\m\b\8\0\l\c\5\c\o\1\y\e\l\2\7\2\s\c\h\e\6\y\4\w\b\o\1\k\u\8\e\v\n\x\3\0\n\p\k\l\9\1\m\f\6\l\l\m\x\n\x\g\1\o\h\u\2\n\r\g\l\7\z\l\j\t\j\8\8\v\d\b\t\s\n\r\6\r\p\d\7\m\c\h\v\h\g\4\4\j\j\n\q\p\j\9\p\h\i\a\r\c\4\o\c\i\z\v\v\e\f\i\u\b\x\7\9\v\v\c\9\m\v\m\f\o\t\i\n\d\7\w\6\x\o\8\a\8\l\m\j\i\p\o\s\j\b\k\v\s\3\6\l\c\z\q\i\3\8\g\0\x\4\1\1\e\n\e\z\p\g\8\s\a\y\e\u\b\s\h\9\i\v\9\l\2\a\m\t\t\p\q\n\4\5\y\5\h\e\o\y\9\9\9\x\c\z\v\o\r\c\q\4\z\h\7\w\5\9\r\6\4\f\6\m\j\d\s\t\8\4\3\c\v\7\1\7\b\r\o\g\u\p\9\i\j\u\c\c\c\p\h\m\6\g\r\r\b\5\q\x\1\p\8\e\3\2\9\0\o\b\c\t\o\0\h\5\s\4\t\j\p\d\9\g\i\a\x\3\2\8\h\q\a\r\x\8\h\0\u\j\t\o\p\b\4\f\t\t\d\p\4\n\q\y\g\a\d\z\i\j\o\3\x\z\t\5\2\s\0\q\e\o\j\r\b\j\6\o\9\l\3\5\e\y\l\6\b\j\s\h\a\n\d\z\g\p\8\0\n\y\y\d\w\h\b\v\b\8\o\j\8\l\k\2\l\8\g\t\s\6\j\h\v\p\n\d\2\r\1\a\2\i\m\x\6\i\m\c\9\m\w\3\h\d\w\j\e\5\s\w\p\j\k\t\w\q\j\y\q\j\d\5\5\w\m\w\q\7\v\9\b\r\b\n\s\u\f\q\y\7\q\3\l\q\f\m\a\i\6\a\m\z\q\1\u\m\k\6\x\5\u\4\s\o\h\7\l\1\c\c\p\m\4\8\j\c\e\9\6\3\7\c\u\8\3\c\f\r\8\i\h\j\d\p\b\7\v\4\x\y\t\p\u\s\f\v\1\r\1\l\m\t\4\u\g\r\6\w\b\u\o\z\5\q\l\i\v\j\w\2\j\p\9\t\k\g\i\o\2\3\n\s\z\r\w\y\j\5\n\8\j\m\m\f\1\k\h\9\4\w\k\j\g\u\s\f\9\p\i\7\y\q\h\0\6\h\x\9\n\z\q\r\b\w\h\4\7\s\u\i\m\y\x\y\1\y\o\m\m\v\m\q\i\b\j\w\8\q\z\0\c\0\x\g\y\i\f\n\r\z\r\p\w\e\y\o\j\v\7\1\z\m\v\m\b\n\h\t\y\m\f\t\4\r\5\b\s\o\1\a\x\6\v\s\v\e\u\b\j\9\5\7\8\1\v\n\h\e\1\2\w\e\i\2\r\k\a\h\r\s\c\2\z\s\m\3\c\d\e\5\2\6\u\i\p\h\r\u\z\o\p\r\s\w\t\n\t\m\g\o\a\8\n\9\b\x\3\5\k\s\y\r\x\p\f\t\a\k\h\2\l\s\q\g\x\e\0\q\e\p\g\y\1\j\q\i\o\5\g\j\w\9\p\p\l\k\g\t\w\t\e\y\r\2\8\j
\r\j\3\f\y\s\9\d\b\u\f\a\0\q\d\4\b\2\c\n\9\x\a\j\x\3\f\k\n\w\c\s\z\w\i\p\q\c\9\y\h\j\b\4\4\3\l\1\x\b\g\w\9\k\p\f\3\h\y\o\j\c\l\p\m\z\4\y\m\m\r\6\w\g\l\2\q\8\d\u\c\9\x\y\t\6\y\7\6\6\y\f\k\r\q\w\4\9\t\t\3\s\f\f\n\j\l\y\8\9\5\e\k\8\c\5\a\n\h\s\e\c\d\i\2\i\n\o\y\t\7\2\4\j\a\z\t\i\0\5\4\x\6\4\3\z\w\z\l\x\a\u\7\4\b\i\l\m\6\d\y\l\z\2\v\l\0\n\f\j\z\i\z\r\x\x\n\1\y\9\2\p\h\y\y\7\j\j\g\n\i\j\1\6\b\w\h\9\8\y\a\t\o\y\8\i\y\l\i\m\2\x\t\d\n\g\n\q\7\9\a\9\s\2\6\a\3\j\v\j\x\d\4\2\5\3\u\j\a\s\s\x\7\0\b\a\h\s\d\k\2\9\s\1\p\a\8\i\f\v\g\9\b\6\2\j\j\r\6\i\n\4\m\9\8\m\f\5\z\s\p\z\n\0\5\2\h\h\b\n\z\2\h\t\5\g\y\q\o\e\i\9\t\h\4\p\w\t\0\b\8\l\7\r\x\b\a\8\7\s\i\7\n\u\4\1\v\d\n\0\m\8\f\s\m\z\d\d\2\d\6\4\c\n\5\2\s\r\z\x\s\c\b\m\z\d\t\g\d\j\t\8\r\g\2\a\0\g\5\c\6\o\d\f\g\c\k\b\x\z\8\0\v\4\n\t\7\7\8\4\8\q\2\4\t\i\6\z\y\z\z\w\h\o\j\2\c\a\o\y\p\n\4\m\z\2\l\1\e\g\q\q\7\u\z\d\5\3\i\8\7\l\p\m\f\y\t\6\f\l\9\t\c\v\m\0\9\n\s\f\g\n\d\y\y\i\r\i\t\g\z\7\k\g\c\0\r\i\b\e\d\6\t\f\m\0\o\b\y\e\p\i\r\6\h\q\z\v\z\p\q\x\5\f\z\g\0\5\8\g\z\e\a\f\r\p\c\h\d\n\e\2\1\p\g\1\q\4\0\9\c\1\5\x\a\n\f\o\p\x\e\6\b\n\b\s\k\u\q\t\z\c\x\m\v\n\z\8\l\4\p\n\o\x\1\l\d\6\v\u\2\k\3\n\0\v\c\t\h\8\0\7\b\c\x\y\o\h\b\k\d\1\7\g\u\9\e\o\v\s\i\f\b\4\d\k\g\r\4\o\k\1\o\9\u\t\p\u\b\0\7\m\n\2\c\u\c\5\f\p\9\8\t\l\e\x\7\i\a\t\p\1\n\e\o\p\j\b\i\u\a\f\7\y\z\0\2\g\g\f\k\p\4\d\f\6\4\7\0\8\f\v\l\f\c\k\z\o\2\q\5\f\o\f\w\b\p\m\l\c\1\b\2\s\6\0\y\z\u\d\s\u\k\l\1\3\i\j\8\z\i\i\4\a\p\1\r\i\m\q\r\q\f\r\m\c\l\j\d\t\d\4\7\t\n\f\d\6\3\k\o\w\r\b\5\q\n\5\5\q\g\7\d\f\6\f\s\2\q\f\x\r\w\u\j\l\2\6\a\g\t\n\6\1\l\n\e\a\m\j\q\y\l\m\s\m\g\6\6\y\1\x\g\x\u\5\3\o\b\k\9\n\1\0\e\y\3\a\b\1\r\y\w\7\4\m\v\h\1\c\e\g\r\i\i\t\y\v\t\d\i\p\g\4\h\c\c\w\i\6\q\r\b\b\b\l\r\4\f\q\2\6\h\3\9\b\4\q\a\d\t\q\1\s\e\p\x\c\x\1\e\9\6\0\c\6\q\z\d\z\v\i\d\x\6\y\y\y\7\7\4\r\h\7\n\b\u\k\x\i\l\8\a\8\t\g\t\f\v\m\y\b\2\w\h\u\b\q\4\o\k\3\4\0\o\v\o\t\2\d\l\w\u\v\y\a\j\6\d\7\m\t\m\9\v\j\7\7\2\s\1\p\7\5\b\z\q\0\i\q\7\1\0\p\u\n\j\k\a\r\h\b\a\g\o\k\i\d\u\i\j\3\8\o\9\b\8\z\h\c\y\m\h\9\f\q\n\l\7\8\2\k\j\d\k\z\l\z\q\t\j\1\t\s\3\i\t\x\x\h\z\p\a\y\9\6\1\f\o\3\b\m\b\0\f\t\9\s\0\d\6\h\0\d\u\o\g\e\2\4\6\c\2\d\f\9\b\5\9\x\f\3\1\r\q\k\y\c\o\a\g\3\o\r\z\j\d\m\h\w\4\2\m\o\4\0\y\v\v\i\r\2\v\t\5\g\a\q\8\9\a\n\c\m\w\p\f\u\t\x\v\y\s\x\k\l\z\s\s\u\1\s\k\b\y\y\p\5\y\c\d\w\8\w\4\u\t\u\z\7\9\6\1\3\k\3\g\1\t\n\t\s\8\4\2\z\i\l\f\a\z\r\x\i\2\7\h\c\n\3\p\2\x\8\x\e\2\c\q\a\a\a\s\3\e\s\a\w\n\e\4\r\6\b\1\5\j\v\1\4\n\9\4\k\7\1\t\x\6\a\w\g\6\g\9\j\d\c\r\e\m\w\6\c\4\8\a\k\o\t\2\v\n\m\p\z\a\5\p\3\y\2\l\l\z\2\k\r\y\8\t\f\1\e\z\d\z\i\s\b\o\b\d\g\g\c\4\6\0\z\a\2\w\h\z\s\3\i\9\d\b\m\v\f\j\e\q\o\z\v\p\z\j\i\w\i\a ]] 00:08:09.421 00:08:09.421 real 0m1.024s 00:08:09.421 user 0m0.695s 00:08:09.421 sys 0m0.391s 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:09.421 ************************************ 00:08:09.421 END TEST dd_rw_offset 00:08:09.421 ************************************ 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
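The dd_rw_offset test that just passed exercises the --seek/--skip options: a 4 KiB random payload is written one block into the bdev, read back from the same offset, and compared with a shell pattern match (the long escaped string above is bash's xtrace of that comparison). A rough sketch of the sequence, assuming the DD and CONF variables from the earlier sketch and illustrative dump-file names:

  # Sketch of the seek/skip round trip; not a verbatim replay of the run above.
  $DD --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(echo "$CONF")            # write 4 KiB starting one block in
  $DD --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(echo "$CONF")  # read it back from the same offset
  read -rn4096 data       < dd.dump0
  read -rn4096 data_check < dd.dump1
  [[ "$data" == "$data_check" ]]                                            # payload must survive unchanged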
00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:09.421 02:12:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.421 [2024-11-08 02:12:11.198331] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:09.421 [2024-11-08 02:12:11.198431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73072 ] 00:08:09.421 { 00:08:09.421 "subsystems": [ 00:08:09.421 { 00:08:09.421 "subsystem": "bdev", 00:08:09.421 "config": [ 00:08:09.421 { 00:08:09.421 "params": { 00:08:09.421 "trtype": "pcie", 00:08:09.421 "traddr": "0000:00:10.0", 00:08:09.421 "name": "Nvme0" 00:08:09.421 }, 00:08:09.421 "method": "bdev_nvme_attach_controller" 00:08:09.421 }, 00:08:09.421 { 00:08:09.421 "method": "bdev_wait_for_examine" 00:08:09.421 } 00:08:09.421 ] 00:08:09.421 } 00:08:09.421 ] 00:08:09.421 } 00:08:09.680 [2024-11-08 02:12:11.335143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.681 [2024-11-08 02:12:11.376988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.681 [2024-11-08 02:12:11.413465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.681  [2024-11-08T02:12:11.823Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:09.939 00:08:09.939 02:12:11 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.939 ************************************ 00:08:09.939 END TEST spdk_dd_basic_rw 00:08:09.939 ************************************ 00:08:09.939 00:08:09.939 real 0m14.722s 00:08:09.939 user 0m10.585s 00:08:09.939 sys 0m4.724s 00:08:09.939 02:12:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.939 02:12:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.939 02:12:11 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:09.939 02:12:11 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.939 02:12:11 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.939 02:12:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:09.939 ************************************ 00:08:09.939 START TEST spdk_dd_posix 00:08:09.939 ************************************ 00:08:09.939 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:09.939 * Looking for test storage... 
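Between suites the cleanup path calls clear_nvme, which simply overwrites the start of the bdev with zeros so the next test begins from a known state; approximately, and again reusing the DD and CONF variables from the first sketch:

  # clear_nvme, approximately: zero-fill one 1 MiB block at the head of the bdev.
  # The size bookkeeping done in dd/common.sh is omitted from this sketch.
  $DD --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(echo "$CONF")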
00:08:09.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:09.939 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:09.939 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:09.939 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.198 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:10.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.199 --rc genhtml_branch_coverage=1 00:08:10.199 --rc genhtml_function_coverage=1 00:08:10.199 --rc genhtml_legend=1 00:08:10.199 --rc geninfo_all_blocks=1 00:08:10.199 --rc geninfo_unexecuted_blocks=1 00:08:10.199 00:08:10.199 ' 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:10.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.199 --rc genhtml_branch_coverage=1 00:08:10.199 --rc genhtml_function_coverage=1 00:08:10.199 --rc genhtml_legend=1 00:08:10.199 --rc geninfo_all_blocks=1 00:08:10.199 --rc geninfo_unexecuted_blocks=1 00:08:10.199 00:08:10.199 ' 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:10.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.199 --rc genhtml_branch_coverage=1 00:08:10.199 --rc genhtml_function_coverage=1 00:08:10.199 --rc genhtml_legend=1 00:08:10.199 --rc geninfo_all_blocks=1 00:08:10.199 --rc geninfo_unexecuted_blocks=1 00:08:10.199 00:08:10.199 ' 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:10.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.199 --rc genhtml_branch_coverage=1 00:08:10.199 --rc genhtml_function_coverage=1 00:08:10.199 --rc genhtml_legend=1 00:08:10.199 --rc geninfo_all_blocks=1 00:08:10.199 --rc geninfo_unexecuted_blocks=1 00:08:10.199 00:08:10.199 ' 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:10.199 * First test run, liburing in use 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:10.199 ************************************ 00:08:10.199 START TEST dd_flag_append 00:08:10.199 ************************************ 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=hrgjln0itvueqd20fs6012nw1wg3ihvb 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=lk4fag8lxztjo1n1sf4qa2aduikv45o3 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s hrgjln0itvueqd20fs6012nw1wg3ihvb 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s lk4fag8lxztjo1n1sf4qa2aduikv45o3 00:08:10.199 02:12:11 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:10.199 [2024-11-08 02:12:11.966355] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:10.199 [2024-11-08 02:12:11.966463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73138 ] 00:08:10.458 [2024-11-08 02:12:12.104574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.458 [2024-11-08 02:12:12.142553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.458 [2024-11-08 02:12:12.180926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.458  [2024-11-08T02:12:12.342Z] Copying: 32/32 [B] (average 31 kBps) 00:08:10.458 00:08:10.458 ************************************ 00:08:10.458 END TEST dd_flag_append 00:08:10.458 ************************************ 00:08:10.458 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ lk4fag8lxztjo1n1sf4qa2aduikv45o3hrgjln0itvueqd20fs6012nw1wg3ihvb == \l\k\4\f\a\g\8\l\x\z\t\j\o\1\n\1\s\f\4\q\a\2\a\d\u\i\k\v\4\5\o\3\h\r\g\j\l\n\0\i\t\v\u\e\q\d\2\0\f\s\6\0\1\2\n\w\1\w\g\3\i\h\v\b ]] 00:08:10.458 00:08:10.458 real 0m0.420s 00:08:10.458 user 0m0.208s 00:08:10.458 sys 0m0.171s 00:08:10.458 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.458 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:10.717 ************************************ 00:08:10.717 START TEST dd_flag_directory 00:08:10.717 ************************************ 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.717 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.717 [2024-11-08 02:12:12.431381] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:10.717 [2024-11-08 02:12:12.431482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73167 ] 00:08:10.717 [2024-11-08 02:12:12.569887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.976 [2024-11-08 02:12:12.611533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.976 [2024-11-08 02:12:12.644623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.976 [2024-11-08 02:12:12.660054] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:10.976 [2024-11-08 02:12:12.660166] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:10.976 [2024-11-08 02:12:12.660181] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.976 [2024-11-08 02:12:12.723293] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.976 02:12:12 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:10.976 02:12:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:10.976 [2024-11-08 02:12:12.851408] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:10.976 [2024-11-08 02:12:12.851490] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73171 ] 00:08:11.235 [2024-11-08 02:12:12.985748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.235 [2024-11-08 02:12:13.030786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.235 [2024-11-08 02:12:13.062454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.235 [2024-11-08 02:12:13.078225] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:11.235 [2024-11-08 02:12:13.078276] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:11.235 [2024-11-08 02:12:13.078305] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.494 [2024-11-08 02:12:13.138400] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:08:11.494 ************************************ 00:08:11.494 END TEST dd_flag_directory 00:08:11.494 ************************************ 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.494 00:08:11.494 real 0m0.843s 00:08:11.494 user 0m0.421s 00:08:11.494 sys 0m0.213s 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:11.494 02:12:13 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:11.494 ************************************ 00:08:11.494 START TEST dd_flag_nofollow 00:08:11.494 ************************************ 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:11.494 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.494 [2024-11-08 02:12:13.327786] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
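The NOT wrapper traced above inverts the exit status of the wrapped spdk_dd call, so an invocation that is expected to fail counts as a pass for the test. A minimal stand-alone approximation of that idiom (not the actual autotest_common.sh implementation, which, as the trace shows, also remaps exit codes above 128) is:

    # Succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Illustrative usage, mirroring the nofollow check in this test:
    NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1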
00:08:11.494 [2024-11-08 02:12:13.327875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73205 ] 00:08:11.753 [2024-11-08 02:12:13.468298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.753 [2024-11-08 02:12:13.510604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.753 [2024-11-08 02:12:13.542895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.753 [2024-11-08 02:12:13.558255] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:11.753 [2024-11-08 02:12:13.558305] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:11.753 [2024-11-08 02:12:13.558334] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.753 [2024-11-08 02:12:13.617368] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.013 02:12:13 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:12.013 02:12:13 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:12.013 [2024-11-08 02:12:13.744516] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:12.013 [2024-11-08 02:12:13.744801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73209 ] 00:08:12.013 [2024-11-08 02:12:13.885429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.271 [2024-11-08 02:12:13.923747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.272 [2024-11-08 02:12:13.951825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.272 [2024-11-08 02:12:13.967739] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:12.272 [2024-11-08 02:12:13.967787] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:12.272 [2024-11-08 02:12:13.967817] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:12.272 [2024-11-08 02:12:14.028360] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:12.272 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:08:12.272 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.272 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:08:12.272 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:08:12.272 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:08:12.272 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.272 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:12.272 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:12.272 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:12.272 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.531 [2024-11-08 02:12:14.165478] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
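The nofollow sequence above creates dd.dump0.link and dd.dump1.link with ln -fs, expects spdk_dd to reject the symlink when nofollow is set on either the input or the output side, and then finishes with a plain copy through the link that should succeed. Reduced to its essentials (paths illustrative, with ! standing in for the harness's NOT wrapper):

    ln -fs dd.dump0 dd.dump0.link
    ln -fs dd.dump1 dd.dump1.link

    # Reading or writing through a symlink must fail when nofollow is requested ...
    ! spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
    ! spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow

    # ... while the same copy without nofollow resolves the link and succeeds.
    spdk_dd --if=dd.dump0.link --of=dd.dump1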
00:08:12.531 [2024-11-08 02:12:14.165581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73216 ] 00:08:12.531 [2024-11-08 02:12:14.302440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.531 [2024-11-08 02:12:14.342902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.531 [2024-11-08 02:12:14.373975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.531  [2024-11-08T02:12:14.674Z] Copying: 512/512 [B] (average 500 kBps) 00:08:12.790 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ iyjoy9t8lkzbzu05zyt37y373ys9gm4ytngw6at0ocu0k8fvgbyix141km5r0a11fiwddx1htbsofodg1iixs5nbfbyenb240ms2v5mr3tjalu40rwq5mpb7st23qcv4rw9vtp30pseipd609rr9bgriblw56d8wcpalkfslgz6j2n0urat445w5p6jvw8fj9f1qizp383k2flmd9l7mp8z54a69etznjqhmh73elqbash1szfxf1tw98g3h3s06r19i6nej730yd4mnsplxlxb0stgmby094xtglrprpw2fhrnr6mj5sgcgzw8pagmzja5skto7wmn1qznn3yldqbm9es6gzrk6xm0ksww7fawdm5uhvdx711hnigh756x950gnf9mucgxd56ccu9bbl690l6m1x9c9eal383pvhrg2lkqhngmnq3dury95loq7mpvt5z4qp5r72vrcwnqwqhpagsj7r8oeminvs4v9qwke0gdvlol49hlvzb708qw2 == \i\y\j\o\y\9\t\8\l\k\z\b\z\u\0\5\z\y\t\3\7\y\3\7\3\y\s\9\g\m\4\y\t\n\g\w\6\a\t\0\o\c\u\0\k\8\f\v\g\b\y\i\x\1\4\1\k\m\5\r\0\a\1\1\f\i\w\d\d\x\1\h\t\b\s\o\f\o\d\g\1\i\i\x\s\5\n\b\f\b\y\e\n\b\2\4\0\m\s\2\v\5\m\r\3\t\j\a\l\u\4\0\r\w\q\5\m\p\b\7\s\t\2\3\q\c\v\4\r\w\9\v\t\p\3\0\p\s\e\i\p\d\6\0\9\r\r\9\b\g\r\i\b\l\w\5\6\d\8\w\c\p\a\l\k\f\s\l\g\z\6\j\2\n\0\u\r\a\t\4\4\5\w\5\p\6\j\v\w\8\f\j\9\f\1\q\i\z\p\3\8\3\k\2\f\l\m\d\9\l\7\m\p\8\z\5\4\a\6\9\e\t\z\n\j\q\h\m\h\7\3\e\l\q\b\a\s\h\1\s\z\f\x\f\1\t\w\9\8\g\3\h\3\s\0\6\r\1\9\i\6\n\e\j\7\3\0\y\d\4\m\n\s\p\l\x\l\x\b\0\s\t\g\m\b\y\0\9\4\x\t\g\l\r\p\r\p\w\2\f\h\r\n\r\6\m\j\5\s\g\c\g\z\w\8\p\a\g\m\z\j\a\5\s\k\t\o\7\w\m\n\1\q\z\n\n\3\y\l\d\q\b\m\9\e\s\6\g\z\r\k\6\x\m\0\k\s\w\w\7\f\a\w\d\m\5\u\h\v\d\x\7\1\1\h\n\i\g\h\7\5\6\x\9\5\0\g\n\f\9\m\u\c\g\x\d\5\6\c\c\u\9\b\b\l\6\9\0\l\6\m\1\x\9\c\9\e\a\l\3\8\3\p\v\h\r\g\2\l\k\q\h\n\g\m\n\q\3\d\u\r\y\9\5\l\o\q\7\m\p\v\t\5\z\4\q\p\5\r\7\2\v\r\c\w\n\q\w\q\h\p\a\g\s\j\7\r\8\o\e\m\i\n\v\s\4\v\9\q\w\k\e\0\g\d\v\l\o\l\4\9\h\l\v\z\b\7\0\8\q\w\2 ]] 00:08:12.790 00:08:12.790 real 0m1.263s 00:08:12.790 user 0m0.612s 00:08:12.790 sys 0m0.411s 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.790 ************************************ 00:08:12.790 END TEST dd_flag_nofollow 00:08:12.790 ************************************ 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:12.790 ************************************ 00:08:12.790 START TEST dd_flag_noatime 00:08:12.790 ************************************ 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731031934 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731031934 00:08:12.790 02:12:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:13.726 02:12:15 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.985 [2024-11-08 02:12:15.659972] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:13.985 [2024-11-08 02:12:15.660267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73259 ] 00:08:13.985 [2024-11-08 02:12:15.802035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.985 [2024-11-08 02:12:15.845625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.244 [2024-11-08 02:12:15.880444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.244  [2024-11-08T02:12:16.128Z] Copying: 512/512 [B] (average 500 kBps) 00:08:14.244 00:08:14.244 02:12:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.244 02:12:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731031934 )) 00:08:14.244 02:12:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.244 02:12:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731031934 )) 00:08:14.244 02:12:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.244 [2024-11-08 02:12:16.106550] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
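The noatime check above records the access time of dd.dump0 with stat --printf=%X, sleeps one second, and copies the file with --iflag=noatime; the assertions later in the trace verify that the recorded access time is unchanged. A compact version of that flow, with illustrative file names, would be:

    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1

    # noatime means the read should not update the source file's access time.
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1

    atime_after=$(stat --printf=%X dd.dump0)
    (( atime_before == atime_after )) && echo "noatime preserved the access time"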
00:08:14.244 [2024-11-08 02:12:16.106840] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73272 ] 00:08:14.503 [2024-11-08 02:12:16.246589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.503 [2024-11-08 02:12:16.281020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.503 [2024-11-08 02:12:16.307081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.503  [2024-11-08T02:12:16.646Z] Copying: 512/512 [B] (average 500 kBps) 00:08:14.762 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.762 ************************************ 00:08:14.762 END TEST dd_flag_noatime 00:08:14.762 ************************************ 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731031936 )) 00:08:14.762 00:08:14.762 real 0m1.877s 00:08:14.762 user 0m0.439s 00:08:14.762 sys 0m0.385s 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:14.762 ************************************ 00:08:14.762 START TEST dd_flags_misc 00:08:14.762 ************************************ 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:14.762 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:14.762 [2024-11-08 02:12:16.571538] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
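dd_flags_misc, which starts above, pairs every read flag in flags_ro=(direct nonblock) with every write flag in flags_rw=(direct nonblock sync dsync), copying 512 generated bytes for each combination and checking that the output matches the input. A stand-alone sketch of that nesting (file names and the byte generator are illustrative stand-ins for the harness's gen_bytes):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)

    for flag_ro in "${flags_ro[@]}"; do
        head -c 512 /dev/urandom > dd.dump0    # stand-in for gen_bytes 512
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
            cmp -s dd.dump0 dd.dump1 && echo "ok: $flag_ro -> $flag_rw"
        done
    done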
00:08:14.763 [2024-11-08 02:12:16.571786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73301 ] 00:08:15.022 [2024-11-08 02:12:16.710790] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.022 [2024-11-08 02:12:16.741935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.022 [2024-11-08 02:12:16.767910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.022  [2024-11-08T02:12:17.166Z] Copying: 512/512 [B] (average 500 kBps) 00:08:15.282 00:08:15.282 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qlqx9s6g3uksqrfzp56e8g511z0msc731pntm69p450901ttzy9ink1zo99or5frerlcihy6u8o0qlisy0xspnlifp5qzg2qpnmsoxml7t3asdiehodxcw7wyj25q4atq6ekxcme0ep5ppibxdc4tdfjha32ydikjs2z4zjho9m5jgjmr4h2gvqarg92p9vydp05vp7hgqfbjtsqz6u2apbsuvysdqtnzf7vk9zkrargdl7wv9wnn4tbbyjsr89q30qnmrxnbimhqoejl5wh06pvo7n6271urvhz7nt1djlpp1nkd17k1jlqugijs8jucrhjsigel2wgy4mlrdrmuk1k8ynzne2r5n1dbuf02zwgxthpmo7sd2u8iiq6oxivjot7h16xeogypkofba9jfnoibtda6wyegbivp1x5nxiprp3v8ap7yvf99fkfajcexzntosuf6z6k55geslkkwtly2ni2dgcmhsqx0796rf88kd3hdakamyg386s4vi17 == \q\l\q\x\9\s\6\g\3\u\k\s\q\r\f\z\p\5\6\e\8\g\5\1\1\z\0\m\s\c\7\3\1\p\n\t\m\6\9\p\4\5\0\9\0\1\t\t\z\y\9\i\n\k\1\z\o\9\9\o\r\5\f\r\e\r\l\c\i\h\y\6\u\8\o\0\q\l\i\s\y\0\x\s\p\n\l\i\f\p\5\q\z\g\2\q\p\n\m\s\o\x\m\l\7\t\3\a\s\d\i\e\h\o\d\x\c\w\7\w\y\j\2\5\q\4\a\t\q\6\e\k\x\c\m\e\0\e\p\5\p\p\i\b\x\d\c\4\t\d\f\j\h\a\3\2\y\d\i\k\j\s\2\z\4\z\j\h\o\9\m\5\j\g\j\m\r\4\h\2\g\v\q\a\r\g\9\2\p\9\v\y\d\p\0\5\v\p\7\h\g\q\f\b\j\t\s\q\z\6\u\2\a\p\b\s\u\v\y\s\d\q\t\n\z\f\7\v\k\9\z\k\r\a\r\g\d\l\7\w\v\9\w\n\n\4\t\b\b\y\j\s\r\8\9\q\3\0\q\n\m\r\x\n\b\i\m\h\q\o\e\j\l\5\w\h\0\6\p\v\o\7\n\6\2\7\1\u\r\v\h\z\7\n\t\1\d\j\l\p\p\1\n\k\d\1\7\k\1\j\l\q\u\g\i\j\s\8\j\u\c\r\h\j\s\i\g\e\l\2\w\g\y\4\m\l\r\d\r\m\u\k\1\k\8\y\n\z\n\e\2\r\5\n\1\d\b\u\f\0\2\z\w\g\x\t\h\p\m\o\7\s\d\2\u\8\i\i\q\6\o\x\i\v\j\o\t\7\h\1\6\x\e\o\g\y\p\k\o\f\b\a\9\j\f\n\o\i\b\t\d\a\6\w\y\e\g\b\i\v\p\1\x\5\n\x\i\p\r\p\3\v\8\a\p\7\y\v\f\9\9\f\k\f\a\j\c\e\x\z\n\t\o\s\u\f\6\z\6\k\5\5\g\e\s\l\k\k\w\t\l\y\2\n\i\2\d\g\c\m\h\s\q\x\0\7\9\6\r\f\8\8\k\d\3\h\d\a\k\a\m\y\g\3\8\6\s\4\v\i\1\7 ]] 00:08:15.282 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:15.282 02:12:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:15.282 [2024-11-08 02:12:16.973216] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:15.282 [2024-11-08 02:12:16.973307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73305 ] 00:08:15.282 [2024-11-08 02:12:17.112853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.282 [2024-11-08 02:12:17.149876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.544 [2024-11-08 02:12:17.177731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.544  [2024-11-08T02:12:17.428Z] Copying: 512/512 [B] (average 500 kBps) 00:08:15.544 00:08:15.544 02:12:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qlqx9s6g3uksqrfzp56e8g511z0msc731pntm69p450901ttzy9ink1zo99or5frerlcihy6u8o0qlisy0xspnlifp5qzg2qpnmsoxml7t3asdiehodxcw7wyj25q4atq6ekxcme0ep5ppibxdc4tdfjha32ydikjs2z4zjho9m5jgjmr4h2gvqarg92p9vydp05vp7hgqfbjtsqz6u2apbsuvysdqtnzf7vk9zkrargdl7wv9wnn4tbbyjsr89q30qnmrxnbimhqoejl5wh06pvo7n6271urvhz7nt1djlpp1nkd17k1jlqugijs8jucrhjsigel2wgy4mlrdrmuk1k8ynzne2r5n1dbuf02zwgxthpmo7sd2u8iiq6oxivjot7h16xeogypkofba9jfnoibtda6wyegbivp1x5nxiprp3v8ap7yvf99fkfajcexzntosuf6z6k55geslkkwtly2ni2dgcmhsqx0796rf88kd3hdakamyg386s4vi17 == \q\l\q\x\9\s\6\g\3\u\k\s\q\r\f\z\p\5\6\e\8\g\5\1\1\z\0\m\s\c\7\3\1\p\n\t\m\6\9\p\4\5\0\9\0\1\t\t\z\y\9\i\n\k\1\z\o\9\9\o\r\5\f\r\e\r\l\c\i\h\y\6\u\8\o\0\q\l\i\s\y\0\x\s\p\n\l\i\f\p\5\q\z\g\2\q\p\n\m\s\o\x\m\l\7\t\3\a\s\d\i\e\h\o\d\x\c\w\7\w\y\j\2\5\q\4\a\t\q\6\e\k\x\c\m\e\0\e\p\5\p\p\i\b\x\d\c\4\t\d\f\j\h\a\3\2\y\d\i\k\j\s\2\z\4\z\j\h\o\9\m\5\j\g\j\m\r\4\h\2\g\v\q\a\r\g\9\2\p\9\v\y\d\p\0\5\v\p\7\h\g\q\f\b\j\t\s\q\z\6\u\2\a\p\b\s\u\v\y\s\d\q\t\n\z\f\7\v\k\9\z\k\r\a\r\g\d\l\7\w\v\9\w\n\n\4\t\b\b\y\j\s\r\8\9\q\3\0\q\n\m\r\x\n\b\i\m\h\q\o\e\j\l\5\w\h\0\6\p\v\o\7\n\6\2\7\1\u\r\v\h\z\7\n\t\1\d\j\l\p\p\1\n\k\d\1\7\k\1\j\l\q\u\g\i\j\s\8\j\u\c\r\h\j\s\i\g\e\l\2\w\g\y\4\m\l\r\d\r\m\u\k\1\k\8\y\n\z\n\e\2\r\5\n\1\d\b\u\f\0\2\z\w\g\x\t\h\p\m\o\7\s\d\2\u\8\i\i\q\6\o\x\i\v\j\o\t\7\h\1\6\x\e\o\g\y\p\k\o\f\b\a\9\j\f\n\o\i\b\t\d\a\6\w\y\e\g\b\i\v\p\1\x\5\n\x\i\p\r\p\3\v\8\a\p\7\y\v\f\9\9\f\k\f\a\j\c\e\x\z\n\t\o\s\u\f\6\z\6\k\5\5\g\e\s\l\k\k\w\t\l\y\2\n\i\2\d\g\c\m\h\s\q\x\0\7\9\6\r\f\8\8\k\d\3\h\d\a\k\a\m\y\g\3\8\6\s\4\v\i\1\7 ]] 00:08:15.544 02:12:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:15.544 02:12:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:15.544 [2024-11-08 02:12:17.356883] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:15.544 [2024-11-08 02:12:17.356967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73320 ] 00:08:15.807 [2024-11-08 02:12:17.493041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.807 [2024-11-08 02:12:17.537430] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.807 [2024-11-08 02:12:17.571667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.807  [2024-11-08T02:12:17.951Z] Copying: 512/512 [B] (average 166 kBps) 00:08:16.067 00:08:16.067 02:12:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qlqx9s6g3uksqrfzp56e8g511z0msc731pntm69p450901ttzy9ink1zo99or5frerlcihy6u8o0qlisy0xspnlifp5qzg2qpnmsoxml7t3asdiehodxcw7wyj25q4atq6ekxcme0ep5ppibxdc4tdfjha32ydikjs2z4zjho9m5jgjmr4h2gvqarg92p9vydp05vp7hgqfbjtsqz6u2apbsuvysdqtnzf7vk9zkrargdl7wv9wnn4tbbyjsr89q30qnmrxnbimhqoejl5wh06pvo7n6271urvhz7nt1djlpp1nkd17k1jlqugijs8jucrhjsigel2wgy4mlrdrmuk1k8ynzne2r5n1dbuf02zwgxthpmo7sd2u8iiq6oxivjot7h16xeogypkofba9jfnoibtda6wyegbivp1x5nxiprp3v8ap7yvf99fkfajcexzntosuf6z6k55geslkkwtly2ni2dgcmhsqx0796rf88kd3hdakamyg386s4vi17 == \q\l\q\x\9\s\6\g\3\u\k\s\q\r\f\z\p\5\6\e\8\g\5\1\1\z\0\m\s\c\7\3\1\p\n\t\m\6\9\p\4\5\0\9\0\1\t\t\z\y\9\i\n\k\1\z\o\9\9\o\r\5\f\r\e\r\l\c\i\h\y\6\u\8\o\0\q\l\i\s\y\0\x\s\p\n\l\i\f\p\5\q\z\g\2\q\p\n\m\s\o\x\m\l\7\t\3\a\s\d\i\e\h\o\d\x\c\w\7\w\y\j\2\5\q\4\a\t\q\6\e\k\x\c\m\e\0\e\p\5\p\p\i\b\x\d\c\4\t\d\f\j\h\a\3\2\y\d\i\k\j\s\2\z\4\z\j\h\o\9\m\5\j\g\j\m\r\4\h\2\g\v\q\a\r\g\9\2\p\9\v\y\d\p\0\5\v\p\7\h\g\q\f\b\j\t\s\q\z\6\u\2\a\p\b\s\u\v\y\s\d\q\t\n\z\f\7\v\k\9\z\k\r\a\r\g\d\l\7\w\v\9\w\n\n\4\t\b\b\y\j\s\r\8\9\q\3\0\q\n\m\r\x\n\b\i\m\h\q\o\e\j\l\5\w\h\0\6\p\v\o\7\n\6\2\7\1\u\r\v\h\z\7\n\t\1\d\j\l\p\p\1\n\k\d\1\7\k\1\j\l\q\u\g\i\j\s\8\j\u\c\r\h\j\s\i\g\e\l\2\w\g\y\4\m\l\r\d\r\m\u\k\1\k\8\y\n\z\n\e\2\r\5\n\1\d\b\u\f\0\2\z\w\g\x\t\h\p\m\o\7\s\d\2\u\8\i\i\q\6\o\x\i\v\j\o\t\7\h\1\6\x\e\o\g\y\p\k\o\f\b\a\9\j\f\n\o\i\b\t\d\a\6\w\y\e\g\b\i\v\p\1\x\5\n\x\i\p\r\p\3\v\8\a\p\7\y\v\f\9\9\f\k\f\a\j\c\e\x\z\n\t\o\s\u\f\6\z\6\k\5\5\g\e\s\l\k\k\w\t\l\y\2\n\i\2\d\g\c\m\h\s\q\x\0\7\9\6\r\f\8\8\k\d\3\h\d\a\k\a\m\y\g\3\8\6\s\4\v\i\1\7 ]] 00:08:16.067 02:12:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:16.067 02:12:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:16.067 [2024-11-08 02:12:17.783386] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:16.067 [2024-11-08 02:12:17.783529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73324 ] 00:08:16.067 [2024-11-08 02:12:17.923374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.326 [2024-11-08 02:12:17.957422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.326 [2024-11-08 02:12:17.985988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.326  [2024-11-08T02:12:18.210Z] Copying: 512/512 [B] (average 250 kBps) 00:08:16.326 00:08:16.326 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qlqx9s6g3uksqrfzp56e8g511z0msc731pntm69p450901ttzy9ink1zo99or5frerlcihy6u8o0qlisy0xspnlifp5qzg2qpnmsoxml7t3asdiehodxcw7wyj25q4atq6ekxcme0ep5ppibxdc4tdfjha32ydikjs2z4zjho9m5jgjmr4h2gvqarg92p9vydp05vp7hgqfbjtsqz6u2apbsuvysdqtnzf7vk9zkrargdl7wv9wnn4tbbyjsr89q30qnmrxnbimhqoejl5wh06pvo7n6271urvhz7nt1djlpp1nkd17k1jlqugijs8jucrhjsigel2wgy4mlrdrmuk1k8ynzne2r5n1dbuf02zwgxthpmo7sd2u8iiq6oxivjot7h16xeogypkofba9jfnoibtda6wyegbivp1x5nxiprp3v8ap7yvf99fkfajcexzntosuf6z6k55geslkkwtly2ni2dgcmhsqx0796rf88kd3hdakamyg386s4vi17 == \q\l\q\x\9\s\6\g\3\u\k\s\q\r\f\z\p\5\6\e\8\g\5\1\1\z\0\m\s\c\7\3\1\p\n\t\m\6\9\p\4\5\0\9\0\1\t\t\z\y\9\i\n\k\1\z\o\9\9\o\r\5\f\r\e\r\l\c\i\h\y\6\u\8\o\0\q\l\i\s\y\0\x\s\p\n\l\i\f\p\5\q\z\g\2\q\p\n\m\s\o\x\m\l\7\t\3\a\s\d\i\e\h\o\d\x\c\w\7\w\y\j\2\5\q\4\a\t\q\6\e\k\x\c\m\e\0\e\p\5\p\p\i\b\x\d\c\4\t\d\f\j\h\a\3\2\y\d\i\k\j\s\2\z\4\z\j\h\o\9\m\5\j\g\j\m\r\4\h\2\g\v\q\a\r\g\9\2\p\9\v\y\d\p\0\5\v\p\7\h\g\q\f\b\j\t\s\q\z\6\u\2\a\p\b\s\u\v\y\s\d\q\t\n\z\f\7\v\k\9\z\k\r\a\r\g\d\l\7\w\v\9\w\n\n\4\t\b\b\y\j\s\r\8\9\q\3\0\q\n\m\r\x\n\b\i\m\h\q\o\e\j\l\5\w\h\0\6\p\v\o\7\n\6\2\7\1\u\r\v\h\z\7\n\t\1\d\j\l\p\p\1\n\k\d\1\7\k\1\j\l\q\u\g\i\j\s\8\j\u\c\r\h\j\s\i\g\e\l\2\w\g\y\4\m\l\r\d\r\m\u\k\1\k\8\y\n\z\n\e\2\r\5\n\1\d\b\u\f\0\2\z\w\g\x\t\h\p\m\o\7\s\d\2\u\8\i\i\q\6\o\x\i\v\j\o\t\7\h\1\6\x\e\o\g\y\p\k\o\f\b\a\9\j\f\n\o\i\b\t\d\a\6\w\y\e\g\b\i\v\p\1\x\5\n\x\i\p\r\p\3\v\8\a\p\7\y\v\f\9\9\f\k\f\a\j\c\e\x\z\n\t\o\s\u\f\6\z\6\k\5\5\g\e\s\l\k\k\w\t\l\y\2\n\i\2\d\g\c\m\h\s\q\x\0\7\9\6\r\f\8\8\k\d\3\h\d\a\k\a\m\y\g\3\8\6\s\4\v\i\1\7 ]] 00:08:16.326 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:16.326 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:16.326 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:16.326 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:16.326 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:16.326 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:16.326 [2024-11-08 02:12:18.191919] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:16.326 [2024-11-08 02:12:18.192188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73333 ] 00:08:16.586 [2024-11-08 02:12:18.329651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.586 [2024-11-08 02:12:18.363317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.586 [2024-11-08 02:12:18.390150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.586  [2024-11-08T02:12:18.728Z] Copying: 512/512 [B] (average 500 kBps) 00:08:16.844 00:08:16.844 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fsvr6d9t174qacjiqvowemm7240fs5oqf16bvxv0l170ylqetba5ax0ro6eob6ibgix31ky5qydk7bp0ghkcdursk74eb17zy3t5kvjmeucv4gmp4xi7azn30n5nf2mifs1lj6fjmln8l5wu78xjmum53h67lvzsv86y3xmsc7poftn3km3iuy2rvdjfzqous9nelfilfysdllng1mrpntxialqffu7imjw2viac5lgbonj17lf3au0h7bfsspalmsavq3tcx8qxo3f6rs5aky1i3ooi1hw0ab9nbzddrejzc880i0yds7uarqp1mv47igzjejg8ndnzntgof8ygwetev7dfjbyj4v68l8sdd0ot46i73mp5wn7w7rtw41jz3t0ylg3d03bq8zbq7w3xjf2bh6z13dc29of8nanbhdt5wsvci4pwq9ow5499u5ynzomconpa6u0w0m29wacnrel8lls7tmwn4m53pfqcxqdctu0ca2jagmptg60792z5 == \f\s\v\r\6\d\9\t\1\7\4\q\a\c\j\i\q\v\o\w\e\m\m\7\2\4\0\f\s\5\o\q\f\1\6\b\v\x\v\0\l\1\7\0\y\l\q\e\t\b\a\5\a\x\0\r\o\6\e\o\b\6\i\b\g\i\x\3\1\k\y\5\q\y\d\k\7\b\p\0\g\h\k\c\d\u\r\s\k\7\4\e\b\1\7\z\y\3\t\5\k\v\j\m\e\u\c\v\4\g\m\p\4\x\i\7\a\z\n\3\0\n\5\n\f\2\m\i\f\s\1\l\j\6\f\j\m\l\n\8\l\5\w\u\7\8\x\j\m\u\m\5\3\h\6\7\l\v\z\s\v\8\6\y\3\x\m\s\c\7\p\o\f\t\n\3\k\m\3\i\u\y\2\r\v\d\j\f\z\q\o\u\s\9\n\e\l\f\i\l\f\y\s\d\l\l\n\g\1\m\r\p\n\t\x\i\a\l\q\f\f\u\7\i\m\j\w\2\v\i\a\c\5\l\g\b\o\n\j\1\7\l\f\3\a\u\0\h\7\b\f\s\s\p\a\l\m\s\a\v\q\3\t\c\x\8\q\x\o\3\f\6\r\s\5\a\k\y\1\i\3\o\o\i\1\h\w\0\a\b\9\n\b\z\d\d\r\e\j\z\c\8\8\0\i\0\y\d\s\7\u\a\r\q\p\1\m\v\4\7\i\g\z\j\e\j\g\8\n\d\n\z\n\t\g\o\f\8\y\g\w\e\t\e\v\7\d\f\j\b\y\j\4\v\6\8\l\8\s\d\d\0\o\t\4\6\i\7\3\m\p\5\w\n\7\w\7\r\t\w\4\1\j\z\3\t\0\y\l\g\3\d\0\3\b\q\8\z\b\q\7\w\3\x\j\f\2\b\h\6\z\1\3\d\c\2\9\o\f\8\n\a\n\b\h\d\t\5\w\s\v\c\i\4\p\w\q\9\o\w\5\4\9\9\u\5\y\n\z\o\m\c\o\n\p\a\6\u\0\w\0\m\2\9\w\a\c\n\r\e\l\8\l\l\s\7\t\m\w\n\4\m\5\3\p\f\q\c\x\q\d\c\t\u\0\c\a\2\j\a\g\m\p\t\g\6\0\7\9\2\z\5 ]] 00:08:16.844 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:16.844 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:16.844 [2024-11-08 02:12:18.604269] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:16.844 [2024-11-08 02:12:18.604360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73343 ] 00:08:17.103 [2024-11-08 02:12:18.742253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.103 [2024-11-08 02:12:18.773959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.103 [2024-11-08 02:12:18.801287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.103  [2024-11-08T02:12:18.987Z] Copying: 512/512 [B] (average 500 kBps) 00:08:17.103 00:08:17.103 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fsvr6d9t174qacjiqvowemm7240fs5oqf16bvxv0l170ylqetba5ax0ro6eob6ibgix31ky5qydk7bp0ghkcdursk74eb17zy3t5kvjmeucv4gmp4xi7azn30n5nf2mifs1lj6fjmln8l5wu78xjmum53h67lvzsv86y3xmsc7poftn3km3iuy2rvdjfzqous9nelfilfysdllng1mrpntxialqffu7imjw2viac5lgbonj17lf3au0h7bfsspalmsavq3tcx8qxo3f6rs5aky1i3ooi1hw0ab9nbzddrejzc880i0yds7uarqp1mv47igzjejg8ndnzntgof8ygwetev7dfjbyj4v68l8sdd0ot46i73mp5wn7w7rtw41jz3t0ylg3d03bq8zbq7w3xjf2bh6z13dc29of8nanbhdt5wsvci4pwq9ow5499u5ynzomconpa6u0w0m29wacnrel8lls7tmwn4m53pfqcxqdctu0ca2jagmptg60792z5 == \f\s\v\r\6\d\9\t\1\7\4\q\a\c\j\i\q\v\o\w\e\m\m\7\2\4\0\f\s\5\o\q\f\1\6\b\v\x\v\0\l\1\7\0\y\l\q\e\t\b\a\5\a\x\0\r\o\6\e\o\b\6\i\b\g\i\x\3\1\k\y\5\q\y\d\k\7\b\p\0\g\h\k\c\d\u\r\s\k\7\4\e\b\1\7\z\y\3\t\5\k\v\j\m\e\u\c\v\4\g\m\p\4\x\i\7\a\z\n\3\0\n\5\n\f\2\m\i\f\s\1\l\j\6\f\j\m\l\n\8\l\5\w\u\7\8\x\j\m\u\m\5\3\h\6\7\l\v\z\s\v\8\6\y\3\x\m\s\c\7\p\o\f\t\n\3\k\m\3\i\u\y\2\r\v\d\j\f\z\q\o\u\s\9\n\e\l\f\i\l\f\y\s\d\l\l\n\g\1\m\r\p\n\t\x\i\a\l\q\f\f\u\7\i\m\j\w\2\v\i\a\c\5\l\g\b\o\n\j\1\7\l\f\3\a\u\0\h\7\b\f\s\s\p\a\l\m\s\a\v\q\3\t\c\x\8\q\x\o\3\f\6\r\s\5\a\k\y\1\i\3\o\o\i\1\h\w\0\a\b\9\n\b\z\d\d\r\e\j\z\c\8\8\0\i\0\y\d\s\7\u\a\r\q\p\1\m\v\4\7\i\g\z\j\e\j\g\8\n\d\n\z\n\t\g\o\f\8\y\g\w\e\t\e\v\7\d\f\j\b\y\j\4\v\6\8\l\8\s\d\d\0\o\t\4\6\i\7\3\m\p\5\w\n\7\w\7\r\t\w\4\1\j\z\3\t\0\y\l\g\3\d\0\3\b\q\8\z\b\q\7\w\3\x\j\f\2\b\h\6\z\1\3\d\c\2\9\o\f\8\n\a\n\b\h\d\t\5\w\s\v\c\i\4\p\w\q\9\o\w\5\4\9\9\u\5\y\n\z\o\m\c\o\n\p\a\6\u\0\w\0\m\2\9\w\a\c\n\r\e\l\8\l\l\s\7\t\m\w\n\4\m\5\3\p\f\q\c\x\q\d\c\t\u\0\c\a\2\j\a\g\m\p\t\g\6\0\7\9\2\z\5 ]] 00:08:17.103 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.103 02:12:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:17.362 [2024-11-08 02:12:19.010805] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:17.363 [2024-11-08 02:12:19.011056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73347 ] 00:08:17.363 [2024-11-08 02:12:19.146638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.363 [2024-11-08 02:12:19.183630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.363 [2024-11-08 02:12:19.211007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.363  [2024-11-08T02:12:19.506Z] Copying: 512/512 [B] (average 250 kBps) 00:08:17.622 00:08:17.622 02:12:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fsvr6d9t174qacjiqvowemm7240fs5oqf16bvxv0l170ylqetba5ax0ro6eob6ibgix31ky5qydk7bp0ghkcdursk74eb17zy3t5kvjmeucv4gmp4xi7azn30n5nf2mifs1lj6fjmln8l5wu78xjmum53h67lvzsv86y3xmsc7poftn3km3iuy2rvdjfzqous9nelfilfysdllng1mrpntxialqffu7imjw2viac5lgbonj17lf3au0h7bfsspalmsavq3tcx8qxo3f6rs5aky1i3ooi1hw0ab9nbzddrejzc880i0yds7uarqp1mv47igzjejg8ndnzntgof8ygwetev7dfjbyj4v68l8sdd0ot46i73mp5wn7w7rtw41jz3t0ylg3d03bq8zbq7w3xjf2bh6z13dc29of8nanbhdt5wsvci4pwq9ow5499u5ynzomconpa6u0w0m29wacnrel8lls7tmwn4m53pfqcxqdctu0ca2jagmptg60792z5 == \f\s\v\r\6\d\9\t\1\7\4\q\a\c\j\i\q\v\o\w\e\m\m\7\2\4\0\f\s\5\o\q\f\1\6\b\v\x\v\0\l\1\7\0\y\l\q\e\t\b\a\5\a\x\0\r\o\6\e\o\b\6\i\b\g\i\x\3\1\k\y\5\q\y\d\k\7\b\p\0\g\h\k\c\d\u\r\s\k\7\4\e\b\1\7\z\y\3\t\5\k\v\j\m\e\u\c\v\4\g\m\p\4\x\i\7\a\z\n\3\0\n\5\n\f\2\m\i\f\s\1\l\j\6\f\j\m\l\n\8\l\5\w\u\7\8\x\j\m\u\m\5\3\h\6\7\l\v\z\s\v\8\6\y\3\x\m\s\c\7\p\o\f\t\n\3\k\m\3\i\u\y\2\r\v\d\j\f\z\q\o\u\s\9\n\e\l\f\i\l\f\y\s\d\l\l\n\g\1\m\r\p\n\t\x\i\a\l\q\f\f\u\7\i\m\j\w\2\v\i\a\c\5\l\g\b\o\n\j\1\7\l\f\3\a\u\0\h\7\b\f\s\s\p\a\l\m\s\a\v\q\3\t\c\x\8\q\x\o\3\f\6\r\s\5\a\k\y\1\i\3\o\o\i\1\h\w\0\a\b\9\n\b\z\d\d\r\e\j\z\c\8\8\0\i\0\y\d\s\7\u\a\r\q\p\1\m\v\4\7\i\g\z\j\e\j\g\8\n\d\n\z\n\t\g\o\f\8\y\g\w\e\t\e\v\7\d\f\j\b\y\j\4\v\6\8\l\8\s\d\d\0\o\t\4\6\i\7\3\m\p\5\w\n\7\w\7\r\t\w\4\1\j\z\3\t\0\y\l\g\3\d\0\3\b\q\8\z\b\q\7\w\3\x\j\f\2\b\h\6\z\1\3\d\c\2\9\o\f\8\n\a\n\b\h\d\t\5\w\s\v\c\i\4\p\w\q\9\o\w\5\4\9\9\u\5\y\n\z\o\m\c\o\n\p\a\6\u\0\w\0\m\2\9\w\a\c\n\r\e\l\8\l\l\s\7\t\m\w\n\4\m\5\3\p\f\q\c\x\q\d\c\t\u\0\c\a\2\j\a\g\m\p\t\g\6\0\7\9\2\z\5 ]] 00:08:17.622 02:12:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.622 02:12:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:17.622 [2024-11-08 02:12:19.407935] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:17.622 [2024-11-08 02:12:19.408028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73362 ] 00:08:17.881 [2024-11-08 02:12:19.544784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.881 [2024-11-08 02:12:19.576602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.881 [2024-11-08 02:12:19.603566] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.881  [2024-11-08T02:12:19.765Z] Copying: 512/512 [B] (average 250 kBps) 00:08:17.881 00:08:17.881 02:12:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fsvr6d9t174qacjiqvowemm7240fs5oqf16bvxv0l170ylqetba5ax0ro6eob6ibgix31ky5qydk7bp0ghkcdursk74eb17zy3t5kvjmeucv4gmp4xi7azn30n5nf2mifs1lj6fjmln8l5wu78xjmum53h67lvzsv86y3xmsc7poftn3km3iuy2rvdjfzqous9nelfilfysdllng1mrpntxialqffu7imjw2viac5lgbonj17lf3au0h7bfsspalmsavq3tcx8qxo3f6rs5aky1i3ooi1hw0ab9nbzddrejzc880i0yds7uarqp1mv47igzjejg8ndnzntgof8ygwetev7dfjbyj4v68l8sdd0ot46i73mp5wn7w7rtw41jz3t0ylg3d03bq8zbq7w3xjf2bh6z13dc29of8nanbhdt5wsvci4pwq9ow5499u5ynzomconpa6u0w0m29wacnrel8lls7tmwn4m53pfqcxqdctu0ca2jagmptg60792z5 == \f\s\v\r\6\d\9\t\1\7\4\q\a\c\j\i\q\v\o\w\e\m\m\7\2\4\0\f\s\5\o\q\f\1\6\b\v\x\v\0\l\1\7\0\y\l\q\e\t\b\a\5\a\x\0\r\o\6\e\o\b\6\i\b\g\i\x\3\1\k\y\5\q\y\d\k\7\b\p\0\g\h\k\c\d\u\r\s\k\7\4\e\b\1\7\z\y\3\t\5\k\v\j\m\e\u\c\v\4\g\m\p\4\x\i\7\a\z\n\3\0\n\5\n\f\2\m\i\f\s\1\l\j\6\f\j\m\l\n\8\l\5\w\u\7\8\x\j\m\u\m\5\3\h\6\7\l\v\z\s\v\8\6\y\3\x\m\s\c\7\p\o\f\t\n\3\k\m\3\i\u\y\2\r\v\d\j\f\z\q\o\u\s\9\n\e\l\f\i\l\f\y\s\d\l\l\n\g\1\m\r\p\n\t\x\i\a\l\q\f\f\u\7\i\m\j\w\2\v\i\a\c\5\l\g\b\o\n\j\1\7\l\f\3\a\u\0\h\7\b\f\s\s\p\a\l\m\s\a\v\q\3\t\c\x\8\q\x\o\3\f\6\r\s\5\a\k\y\1\i\3\o\o\i\1\h\w\0\a\b\9\n\b\z\d\d\r\e\j\z\c\8\8\0\i\0\y\d\s\7\u\a\r\q\p\1\m\v\4\7\i\g\z\j\e\j\g\8\n\d\n\z\n\t\g\o\f\8\y\g\w\e\t\e\v\7\d\f\j\b\y\j\4\v\6\8\l\8\s\d\d\0\o\t\4\6\i\7\3\m\p\5\w\n\7\w\7\r\t\w\4\1\j\z\3\t\0\y\l\g\3\d\0\3\b\q\8\z\b\q\7\w\3\x\j\f\2\b\h\6\z\1\3\d\c\2\9\o\f\8\n\a\n\b\h\d\t\5\w\s\v\c\i\4\p\w\q\9\o\w\5\4\9\9\u\5\y\n\z\o\m\c\o\n\p\a\6\u\0\w\0\m\2\9\w\a\c\n\r\e\l\8\l\l\s\7\t\m\w\n\4\m\5\3\p\f\q\c\x\q\d\c\t\u\0\c\a\2\j\a\g\m\p\t\g\6\0\7\9\2\z\5 ]] 00:08:17.881 00:08:17.881 real 0m3.232s 00:08:17.881 user 0m1.613s 00:08:17.881 sys 0m1.408s 00:08:17.881 02:12:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.881 02:12:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:17.881 ************************************ 00:08:17.881 END TEST dd_flags_misc 00:08:17.881 ************************************ 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:18.141 * Second test run, disabling liburing, forcing AIO 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.141 ************************************ 00:08:18.141 START TEST dd_flag_append_forced_aio 00:08:18.141 ************************************ 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=81gbo9x8surn2c7oxzstpasaimvp1353 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=b27z4g6jk6gbx7097nop866xtbcuutnd 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 81gbo9x8surn2c7oxzstpasaimvp1353 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s b27z4g6jk6gbx7097nop866xtbcuutnd 00:08:18.141 02:12:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:18.141 [2024-11-08 02:12:19.843806] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
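For this second pass the harness appends --aio to DD_APP, so the append test repeats with the AIO path forced instead of liburing. The flow above, two generated 32-character strings written out and then combined with --oflag=append, reduces to roughly the following (the byte generator and file names are illustrative):

    dump0=$(head -c 24 /dev/urandom | base64)   # stand-in for gen_bytes 32
    dump1=$(head -c 24 /dev/urandom | base64)

    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1

    # Appending dd.dump0 onto dd.dump1 under forced AIO should leave dump1 followed by dump0.
    spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(cat dd.dump1) == "${dump1}${dump0}" ]] && echo "append ok under AIO"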
00:08:18.141 [2024-11-08 02:12:19.843890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73385 ] 00:08:18.141 [2024-11-08 02:12:19.973995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.141 [2024-11-08 02:12:20.006863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.400 [2024-11-08 02:12:20.036085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.400  [2024-11-08T02:12:20.284Z] Copying: 32/32 [B] (average 31 kBps) 00:08:18.400 00:08:18.400 ************************************ 00:08:18.400 END TEST dd_flag_append_forced_aio 00:08:18.400 ************************************ 00:08:18.400 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ b27z4g6jk6gbx7097nop866xtbcuutnd81gbo9x8surn2c7oxzstpasaimvp1353 == \b\2\7\z\4\g\6\j\k\6\g\b\x\7\0\9\7\n\o\p\8\6\6\x\t\b\c\u\u\t\n\d\8\1\g\b\o\9\x\8\s\u\r\n\2\c\7\o\x\z\s\t\p\a\s\a\i\m\v\p\1\3\5\3 ]] 00:08:18.400 00:08:18.400 real 0m0.397s 00:08:18.400 user 0m0.186s 00:08:18.400 sys 0m0.093s 00:08:18.400 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.400 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:18.400 02:12:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:18.400 02:12:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:18.401 ************************************ 00:08:18.401 START TEST dd_flag_directory_forced_aio 00:08:18.401 ************************************ 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.401 02:12:20 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.401 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.662 [2024-11-08 02:12:20.294696] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:18.662 [2024-11-08 02:12:20.294784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73411 ] 00:08:18.662 [2024-11-08 02:12:20.423791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.662 [2024-11-08 02:12:20.456652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.662 [2024-11-08 02:12:20.483254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.662 [2024-11-08 02:12:20.497898] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:18.662 [2024-11-08 02:12:20.497950] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:18.662 [2024-11-08 02:12:20.497977] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.921 [2024-11-08 02:12:20.558862] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.921 02:12:20 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:18.921 [2024-11-08 02:12:20.682460] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:18.921 [2024-11-08 02:12:20.682927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73421 ] 00:08:19.180 [2024-11-08 02:12:20.822410] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.180 [2024-11-08 02:12:20.855806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.180 [2024-11-08 02:12:20.882892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.180 [2024-11-08 02:12:20.897872] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:19.180 [2024-11-08 02:12:20.898220] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:19.180 [2024-11-08 02:12:20.898239] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.180 [2024-11-08 02:12:20.956204] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:19.180 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:19.180 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.180 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:19.180 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:19.180 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:19.180 02:12:21 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.180 00:08:19.180 real 0m0.778s 00:08:19.180 user 0m0.381s 00:08:19.180 sys 0m0.185s 00:08:19.180 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.180 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:19.180 ************************************ 00:08:19.180 END TEST dd_flag_directory_forced_aio 00:08:19.180 ************************************ 00:08:19.180 02:12:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:19.180 02:12:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.180 02:12:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.180 02:12:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:19.440 ************************************ 00:08:19.440 START TEST dd_flag_nofollow_forced_aio 00:08:19.440 ************************************ 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
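The xtrace for the dd_flag_directory_forced_aio case above is dense, so here is its shape in plain shell: it is a negative test in which spdk_dd is pointed at a regular file with --iflag=directory (and, in the second invocation, --oflag=directory) and the run must fail with "Not a directory". This is a simplified sketch of that logic, not the literal test/dd/posix.sh helper; the DD and SRC variables are illustrative, while the binary path, flags, and dump file come from the log above.

    # expect spdk_dd to refuse a plain file when the directory flag is set
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0

    if "$DD" --aio --if="$SRC" --iflag=directory --of="$SRC"; then
        echo "FAIL: --iflag=directory accepted a regular file" >&2
        exit 1
    fi
    if "$DD" --aio --if="$SRC" --of="$SRC" --oflag=directory; then
        echo "FAIL: --oflag=directory accepted a regular file" >&2
        exit 1
    fi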
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.440 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.440 [2024-11-08 02:12:21.136677] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:19.440 [2024-11-08 02:12:21.136772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73444 ] 00:08:19.440 [2024-11-08 02:12:21.276558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.440 [2024-11-08 02:12:21.308830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.699 [2024-11-08 02:12:21.336766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.699 [2024-11-08 02:12:21.352997] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:19.699 [2024-11-08 02:12:21.353056] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:19.699 [2024-11-08 02:12:21.353086] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.699 [2024-11-08 02:12:21.413606] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.699 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.700 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.700 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.700 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.700 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:19.700 [2024-11-08 02:12:21.547534] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:19.700 [2024-11-08 02:12:21.547654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73459 ] 00:08:19.959 [2024-11-08 02:12:21.685636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.959 [2024-11-08 02:12:21.720639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.959 [2024-11-08 02:12:21.747781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.959 [2024-11-08 02:12:21.762613] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:19.959 [2024-11-08 02:12:21.762661] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:19.959 [2024-11-08 02:12:21.762690] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.959 [2024-11-08 02:12:21.821055] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:20.219 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:20.219 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.219 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:20.219 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:20.219 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:20.219 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.219 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:20.219 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:20.219 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:20.219 02:12:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.219 [2024-11-08 02:12:21.949913] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:20.219 [2024-11-08 02:12:21.950012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73461 ] 00:08:20.219 [2024-11-08 02:12:22.080960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.479 [2024-11-08 02:12:22.114325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.479 [2024-11-08 02:12:22.141108] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.479  [2024-11-08T02:12:22.363Z] Copying: 512/512 [B] (average 500 kBps) 00:08:20.479 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ lxvmcscpeg2ketgcvv3imvtl6xu9nn7dshc2ou79lk7h40kw4q5vsfknmu3agyghork6pvqiurhzowrztwim9waupfirxtiq7z7hss4na6xltbgmqbukgi9p6ud1ubzji4sti5e1msv6cyy1ouhhtk37x77qh7z4tmeuxj4h71uksgadea904ry7h8tgw87zvwgovyqy3f6aqyc1ufg5scx1e4bnt34t5bkxsvlsas6a2bqujpuvqqspt0aj2azb4smgsy6aq0057t6v0hr0831e79l66y2cbfxpd2l3po3481jalwqj6rybgk7hieqa2urns0z6epp6emwltwva8999syp95d8c5o4rdt4bxcea5wyfy6d912jscfao1ygwr6bl5cgu8an1a9jeikolohl7hv55v8wqtboj4yqqeafrov6mfqzdm33qx9hhetefi385ilt14jc6237rpvqsx5asryfelvrj9pi7574fw3f6g6avqag2kem9nes8ind9 == \l\x\v\m\c\s\c\p\e\g\2\k\e\t\g\c\v\v\3\i\m\v\t\l\6\x\u\9\n\n\7\d\s\h\c\2\o\u\7\9\l\k\7\h\4\0\k\w\4\q\5\v\s\f\k\n\m\u\3\a\g\y\g\h\o\r\k\6\p\v\q\i\u\r\h\z\o\w\r\z\t\w\i\m\9\w\a\u\p\f\i\r\x\t\i\q\7\z\7\h\s\s\4\n\a\6\x\l\t\b\g\m\q\b\u\k\g\i\9\p\6\u\d\1\u\b\z\j\i\4\s\t\i\5\e\1\m\s\v\6\c\y\y\1\o\u\h\h\t\k\3\7\x\7\7\q\h\7\z\4\t\m\e\u\x\j\4\h\7\1\u\k\s\g\a\d\e\a\9\0\4\r\y\7\h\8\t\g\w\8\7\z\v\w\g\o\v\y\q\y\3\f\6\a\q\y\c\1\u\f\g\5\s\c\x\1\e\4\b\n\t\3\4\t\5\b\k\x\s\v\l\s\a\s\6\a\2\b\q\u\j\p\u\v\q\q\s\p\t\0\a\j\2\a\z\b\4\s\m\g\s\y\6\a\q\0\0\5\7\t\6\v\0\h\r\0\8\3\1\e\7\9\l\6\6\y\2\c\b\f\x\p\d\2\l\3\p\o\3\4\8\1\j\a\l\w\q\j\6\r\y\b\g\k\7\h\i\e\q\a\2\u\r\n\s\0\z\6\e\p\p\6\e\m\w\l\t\w\v\a\8\9\9\9\s\y\p\9\5\d\8\c\5\o\4\r\d\t\4\b\x\c\e\a\5\w\y\f\y\6\d\9\1\2\j\s\c\f\a\o\1\y\g\w\r\6\b\l\5\c\g\u\8\a\n\1\a\9\j\e\i\k\o\l\o\h\l\7\h\v\5\5\v\8\w\q\t\b\o\j\4\y\q\q\e\a\f\r\o\v\6\m\f\q\z\d\m\3\3\q\x\9\h\h\e\t\e\f\i\3\8\5\i\l\t\1\4\j\c\6\2\3\7\r\p\v\q\s\x\5\a\s\r\y\f\e\l\v\r\j\9\p\i\7\5\7\4\f\w\3\f\6\g\6\a\v\q\a\g\2\k\e\m\9\n\e\s\8\i\n\d\9 ]] 00:08:20.479 00:08:20.479 real 0m1.234s 00:08:20.479 user 0m0.616s 00:08:20.479 sys 0m0.288s 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.479 ************************************ 00:08:20.479 END TEST dd_flag_nofollow_forced_aio 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:20.479 ************************************ 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix -- 
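Stripped of the harness, the dd_flag_nofollow_forced_aio sequence that just finished does three things: a copy must fail with "Too many levels of symbolic links" when nofollow is set on the input link, it must fail again when set on the output link, and a copy through the same link without the flag must succeed and preserve the 512-byte payload (the long [[ ... == ... ]] expansion above is that comparison). A hedged sketch, reusing the dump-file paths from the log; cmp stands in for the string comparison the real script performs.

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    T=/home/vagrant/spdk_repo/spdk/test/dd
    ln -fs "$T/dd.dump0" "$T/dd.dump0.link"
    ln -fs "$T/dd.dump1" "$T/dd.dump1.link"

    "$DD" --aio --if="$T/dd.dump0.link" --iflag=nofollow --of="$T/dd.dump1" && exit 1   # must fail: ELOOP
    "$DD" --aio --if="$T/dd.dump0" --of="$T/dd.dump1.link" --oflag=nofollow && exit 1   # must fail: ELOOP
    "$DD" --aio --if="$T/dd.dump0.link" --of="$T/dd.dump1"                              # plain copy through the link succeeds
    cmp "$T/dd.dump0" "$T/dd.dump1"                                                     # payload survives the round trip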
dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:20.479 ************************************ 00:08:20.479 START TEST dd_flag_noatime_forced_aio 00:08:20.479 ************************************ 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:20.479 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:20.738 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:20.738 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731031942 00:08:20.738 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.738 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731031942 00:08:20.738 02:12:22 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:21.673 02:12:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.674 [2024-11-08 02:12:23.427746] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:21.674 [2024-11-08 02:12:23.427847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73501 ] 00:08:21.932 [2024-11-08 02:12:23.557880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.932 [2024-11-08 02:12:23.590637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.932 [2024-11-08 02:12:23.617976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.932  [2024-11-08T02:12:23.816Z] Copying: 512/512 [B] (average 500 kBps) 00:08:21.932 00:08:21.932 02:12:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:21.932 02:12:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731031942 )) 00:08:21.932 02:12:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:21.932 02:12:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731031942 )) 00:08:21.932 02:12:23 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.191 [2024-11-08 02:12:23.837712] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:22.191 [2024-11-08 02:12:23.837825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73513 ] 00:08:22.191 [2024-11-08 02:12:23.962901] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.191 [2024-11-08 02:12:23.995735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.191 [2024-11-08 02:12:24.022324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.191  [2024-11-08T02:12:24.334Z] Copying: 512/512 [B] (average 500 kBps) 00:08:22.450 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731031944 )) 00:08:22.450 00:08:22.450 real 0m1.835s 00:08:22.450 user 0m0.390s 00:08:22.450 sys 0m0.200s 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:22.450 ************************************ 00:08:22.450 END TEST dd_flag_noatime_forced_aio 00:08:22.450 ************************************ 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix -- 
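The noatime case that just ended is easier to follow as a few lines of shell: record the access time with stat --printf=%X, sleep one second so a fresh read would bump it, copy with --iflag=noatime and check the atime did not move, then copy without the flag and check that it did. A simplified sketch under those assumptions (not the exact posix.sh body; the -gt check approximates the timestamp comparison the script does against the values captured above).

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    T=/home/vagrant/spdk_repo/spdk/test/dd

    atime_before=$(stat --printf=%X "$T/dd.dump0")
    sleep 1

    "$DD" --aio --if="$T/dd.dump0" --iflag=noatime --of="$T/dd.dump1"
    [[ $(stat --printf=%X "$T/dd.dump0") -eq $atime_before ]] || echo "FAIL: atime changed despite noatime" >&2

    "$DD" --aio --if="$T/dd.dump0" --of="$T/dd.dump1"
    [[ $(stat --printf=%X "$T/dd.dump0") -gt $atime_before ]] || echo "FAIL: atime did not advance" >&2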
common/autotest_common.sh@10 -- # set +x 00:08:22.450 ************************************ 00:08:22.450 START TEST dd_flags_misc_forced_aio 00:08:22.450 ************************************ 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:22.450 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.451 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:22.451 [2024-11-08 02:12:24.310749] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:22.451 [2024-11-08 02:12:24.310838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73534 ] 00:08:22.710 [2024-11-08 02:12:24.447238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.710 [2024-11-08 02:12:24.479087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.710 [2024-11-08 02:12:24.505698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.710  [2024-11-08T02:12:24.853Z] Copying: 512/512 [B] (average 500 kBps) 00:08:22.969 00:08:22.969 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vyqk0wrotgi9itc495b42aocns0mrx8ku7ei6siheq0pdfm1gedc8dxc2ybaas5m10d1qg2gsn2s3dcuctn01g10egh7ldjf6s2a98g7cbdudnvzbbehjxj8mcyw6i0u44uomjz6vr56m9lhuzlcuc6xjvfmwq0tcwgassjn6tqcy36u5zo4mm8s8qcffurk5qsiq9avwqhlrhn34q0xbqvi7nap2juw6hqxj4hcr9v0lo0u4t2ot1g0j94tkc3tb0ul31ld1b6ppjqb441xdz3hv5nsz9ia4cmrd9piun4okuisjwsfeo5y31t8u9awoezwcwibf3h1hbsiq13m3m51icgle1tx3u3dw97l9n9soh73of2zfrgjvmn1g6dhom6axa0w9wevl89g47hs9u512ir2w4c6mb0zycgl9d1d0zxky358oaodvy50ydtf0xkemjtgmpssoxf6henucg4dohnz5m4n8je6nvfm913egvi6mctzowogg03vkw7s == 
\v\y\q\k\0\w\r\o\t\g\i\9\i\t\c\4\9\5\b\4\2\a\o\c\n\s\0\m\r\x\8\k\u\7\e\i\6\s\i\h\e\q\0\p\d\f\m\1\g\e\d\c\8\d\x\c\2\y\b\a\a\s\5\m\1\0\d\1\q\g\2\g\s\n\2\s\3\d\c\u\c\t\n\0\1\g\1\0\e\g\h\7\l\d\j\f\6\s\2\a\9\8\g\7\c\b\d\u\d\n\v\z\b\b\e\h\j\x\j\8\m\c\y\w\6\i\0\u\4\4\u\o\m\j\z\6\v\r\5\6\m\9\l\h\u\z\l\c\u\c\6\x\j\v\f\m\w\q\0\t\c\w\g\a\s\s\j\n\6\t\q\c\y\3\6\u\5\z\o\4\m\m\8\s\8\q\c\f\f\u\r\k\5\q\s\i\q\9\a\v\w\q\h\l\r\h\n\3\4\q\0\x\b\q\v\i\7\n\a\p\2\j\u\w\6\h\q\x\j\4\h\c\r\9\v\0\l\o\0\u\4\t\2\o\t\1\g\0\j\9\4\t\k\c\3\t\b\0\u\l\3\1\l\d\1\b\6\p\p\j\q\b\4\4\1\x\d\z\3\h\v\5\n\s\z\9\i\a\4\c\m\r\d\9\p\i\u\n\4\o\k\u\i\s\j\w\s\f\e\o\5\y\3\1\t\8\u\9\a\w\o\e\z\w\c\w\i\b\f\3\h\1\h\b\s\i\q\1\3\m\3\m\5\1\i\c\g\l\e\1\t\x\3\u\3\d\w\9\7\l\9\n\9\s\o\h\7\3\o\f\2\z\f\r\g\j\v\m\n\1\g\6\d\h\o\m\6\a\x\a\0\w\9\w\e\v\l\8\9\g\4\7\h\s\9\u\5\1\2\i\r\2\w\4\c\6\m\b\0\z\y\c\g\l\9\d\1\d\0\z\x\k\y\3\5\8\o\a\o\d\v\y\5\0\y\d\t\f\0\x\k\e\m\j\t\g\m\p\s\s\o\x\f\6\h\e\n\u\c\g\4\d\o\h\n\z\5\m\4\n\8\j\e\6\n\v\f\m\9\1\3\e\g\v\i\6\m\c\t\z\o\w\o\g\g\0\3\v\k\w\7\s ]] 00:08:22.969 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.969 02:12:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:22.969 [2024-11-08 02:12:24.728389] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:22.969 [2024-11-08 02:12:24.728489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73547 ] 00:08:23.228 [2024-11-08 02:12:24.862384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.228 [2024-11-08 02:12:24.894505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.228 [2024-11-08 02:12:24.921684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.228  [2024-11-08T02:12:25.112Z] Copying: 512/512 [B] (average 500 kBps) 00:08:23.228 00:08:23.228 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vyqk0wrotgi9itc495b42aocns0mrx8ku7ei6siheq0pdfm1gedc8dxc2ybaas5m10d1qg2gsn2s3dcuctn01g10egh7ldjf6s2a98g7cbdudnvzbbehjxj8mcyw6i0u44uomjz6vr56m9lhuzlcuc6xjvfmwq0tcwgassjn6tqcy36u5zo4mm8s8qcffurk5qsiq9avwqhlrhn34q0xbqvi7nap2juw6hqxj4hcr9v0lo0u4t2ot1g0j94tkc3tb0ul31ld1b6ppjqb441xdz3hv5nsz9ia4cmrd9piun4okuisjwsfeo5y31t8u9awoezwcwibf3h1hbsiq13m3m51icgle1tx3u3dw97l9n9soh73of2zfrgjvmn1g6dhom6axa0w9wevl89g47hs9u512ir2w4c6mb0zycgl9d1d0zxky358oaodvy50ydtf0xkemjtgmpssoxf6henucg4dohnz5m4n8je6nvfm913egvi6mctzowogg03vkw7s == 
\v\y\q\k\0\w\r\o\t\g\i\9\i\t\c\4\9\5\b\4\2\a\o\c\n\s\0\m\r\x\8\k\u\7\e\i\6\s\i\h\e\q\0\p\d\f\m\1\g\e\d\c\8\d\x\c\2\y\b\a\a\s\5\m\1\0\d\1\q\g\2\g\s\n\2\s\3\d\c\u\c\t\n\0\1\g\1\0\e\g\h\7\l\d\j\f\6\s\2\a\9\8\g\7\c\b\d\u\d\n\v\z\b\b\e\h\j\x\j\8\m\c\y\w\6\i\0\u\4\4\u\o\m\j\z\6\v\r\5\6\m\9\l\h\u\z\l\c\u\c\6\x\j\v\f\m\w\q\0\t\c\w\g\a\s\s\j\n\6\t\q\c\y\3\6\u\5\z\o\4\m\m\8\s\8\q\c\f\f\u\r\k\5\q\s\i\q\9\a\v\w\q\h\l\r\h\n\3\4\q\0\x\b\q\v\i\7\n\a\p\2\j\u\w\6\h\q\x\j\4\h\c\r\9\v\0\l\o\0\u\4\t\2\o\t\1\g\0\j\9\4\t\k\c\3\t\b\0\u\l\3\1\l\d\1\b\6\p\p\j\q\b\4\4\1\x\d\z\3\h\v\5\n\s\z\9\i\a\4\c\m\r\d\9\p\i\u\n\4\o\k\u\i\s\j\w\s\f\e\o\5\y\3\1\t\8\u\9\a\w\o\e\z\w\c\w\i\b\f\3\h\1\h\b\s\i\q\1\3\m\3\m\5\1\i\c\g\l\e\1\t\x\3\u\3\d\w\9\7\l\9\n\9\s\o\h\7\3\o\f\2\z\f\r\g\j\v\m\n\1\g\6\d\h\o\m\6\a\x\a\0\w\9\w\e\v\l\8\9\g\4\7\h\s\9\u\5\1\2\i\r\2\w\4\c\6\m\b\0\z\y\c\g\l\9\d\1\d\0\z\x\k\y\3\5\8\o\a\o\d\v\y\5\0\y\d\t\f\0\x\k\e\m\j\t\g\m\p\s\s\o\x\f\6\h\e\n\u\c\g\4\d\o\h\n\z\5\m\4\n\8\j\e\6\n\v\f\m\9\1\3\e\g\v\i\6\m\c\t\z\o\w\o\g\g\0\3\v\k\w\7\s ]] 00:08:23.228 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.228 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:23.488 [2024-11-08 02:12:25.133794] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:23.488 [2024-11-08 02:12:25.133893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73549 ] 00:08:23.488 [2024-11-08 02:12:25.270246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.488 [2024-11-08 02:12:25.305575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.488 [2024-11-08 02:12:25.334055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.488  [2024-11-08T02:12:25.631Z] Copying: 512/512 [B] (average 166 kBps) 00:08:23.747 00:08:23.747 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vyqk0wrotgi9itc495b42aocns0mrx8ku7ei6siheq0pdfm1gedc8dxc2ybaas5m10d1qg2gsn2s3dcuctn01g10egh7ldjf6s2a98g7cbdudnvzbbehjxj8mcyw6i0u44uomjz6vr56m9lhuzlcuc6xjvfmwq0tcwgassjn6tqcy36u5zo4mm8s8qcffurk5qsiq9avwqhlrhn34q0xbqvi7nap2juw6hqxj4hcr9v0lo0u4t2ot1g0j94tkc3tb0ul31ld1b6ppjqb441xdz3hv5nsz9ia4cmrd9piun4okuisjwsfeo5y31t8u9awoezwcwibf3h1hbsiq13m3m51icgle1tx3u3dw97l9n9soh73of2zfrgjvmn1g6dhom6axa0w9wevl89g47hs9u512ir2w4c6mb0zycgl9d1d0zxky358oaodvy50ydtf0xkemjtgmpssoxf6henucg4dohnz5m4n8je6nvfm913egvi6mctzowogg03vkw7s == 
\v\y\q\k\0\w\r\o\t\g\i\9\i\t\c\4\9\5\b\4\2\a\o\c\n\s\0\m\r\x\8\k\u\7\e\i\6\s\i\h\e\q\0\p\d\f\m\1\g\e\d\c\8\d\x\c\2\y\b\a\a\s\5\m\1\0\d\1\q\g\2\g\s\n\2\s\3\d\c\u\c\t\n\0\1\g\1\0\e\g\h\7\l\d\j\f\6\s\2\a\9\8\g\7\c\b\d\u\d\n\v\z\b\b\e\h\j\x\j\8\m\c\y\w\6\i\0\u\4\4\u\o\m\j\z\6\v\r\5\6\m\9\l\h\u\z\l\c\u\c\6\x\j\v\f\m\w\q\0\t\c\w\g\a\s\s\j\n\6\t\q\c\y\3\6\u\5\z\o\4\m\m\8\s\8\q\c\f\f\u\r\k\5\q\s\i\q\9\a\v\w\q\h\l\r\h\n\3\4\q\0\x\b\q\v\i\7\n\a\p\2\j\u\w\6\h\q\x\j\4\h\c\r\9\v\0\l\o\0\u\4\t\2\o\t\1\g\0\j\9\4\t\k\c\3\t\b\0\u\l\3\1\l\d\1\b\6\p\p\j\q\b\4\4\1\x\d\z\3\h\v\5\n\s\z\9\i\a\4\c\m\r\d\9\p\i\u\n\4\o\k\u\i\s\j\w\s\f\e\o\5\y\3\1\t\8\u\9\a\w\o\e\z\w\c\w\i\b\f\3\h\1\h\b\s\i\q\1\3\m\3\m\5\1\i\c\g\l\e\1\t\x\3\u\3\d\w\9\7\l\9\n\9\s\o\h\7\3\o\f\2\z\f\r\g\j\v\m\n\1\g\6\d\h\o\m\6\a\x\a\0\w\9\w\e\v\l\8\9\g\4\7\h\s\9\u\5\1\2\i\r\2\w\4\c\6\m\b\0\z\y\c\g\l\9\d\1\d\0\z\x\k\y\3\5\8\o\a\o\d\v\y\5\0\y\d\t\f\0\x\k\e\m\j\t\g\m\p\s\s\o\x\f\6\h\e\n\u\c\g\4\d\o\h\n\z\5\m\4\n\8\j\e\6\n\v\f\m\9\1\3\e\g\v\i\6\m\c\t\z\o\w\o\g\g\0\3\v\k\w\7\s ]] 00:08:23.747 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:23.747 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:23.747 [2024-11-08 02:12:25.568976] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:23.747 [2024-11-08 02:12:25.569073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73562 ] 00:08:24.007 [2024-11-08 02:12:25.707392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.007 [2024-11-08 02:12:25.742329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.007 [2024-11-08 02:12:25.770368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.007  [2024-11-08T02:12:26.149Z] Copying: 512/512 [B] (average 500 kBps) 00:08:24.265 00:08:24.265 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ vyqk0wrotgi9itc495b42aocns0mrx8ku7ei6siheq0pdfm1gedc8dxc2ybaas5m10d1qg2gsn2s3dcuctn01g10egh7ldjf6s2a98g7cbdudnvzbbehjxj8mcyw6i0u44uomjz6vr56m9lhuzlcuc6xjvfmwq0tcwgassjn6tqcy36u5zo4mm8s8qcffurk5qsiq9avwqhlrhn34q0xbqvi7nap2juw6hqxj4hcr9v0lo0u4t2ot1g0j94tkc3tb0ul31ld1b6ppjqb441xdz3hv5nsz9ia4cmrd9piun4okuisjwsfeo5y31t8u9awoezwcwibf3h1hbsiq13m3m51icgle1tx3u3dw97l9n9soh73of2zfrgjvmn1g6dhom6axa0w9wevl89g47hs9u512ir2w4c6mb0zycgl9d1d0zxky358oaodvy50ydtf0xkemjtgmpssoxf6henucg4dohnz5m4n8je6nvfm913egvi6mctzowogg03vkw7s == 
\v\y\q\k\0\w\r\o\t\g\i\9\i\t\c\4\9\5\b\4\2\a\o\c\n\s\0\m\r\x\8\k\u\7\e\i\6\s\i\h\e\q\0\p\d\f\m\1\g\e\d\c\8\d\x\c\2\y\b\a\a\s\5\m\1\0\d\1\q\g\2\g\s\n\2\s\3\d\c\u\c\t\n\0\1\g\1\0\e\g\h\7\l\d\j\f\6\s\2\a\9\8\g\7\c\b\d\u\d\n\v\z\b\b\e\h\j\x\j\8\m\c\y\w\6\i\0\u\4\4\u\o\m\j\z\6\v\r\5\6\m\9\l\h\u\z\l\c\u\c\6\x\j\v\f\m\w\q\0\t\c\w\g\a\s\s\j\n\6\t\q\c\y\3\6\u\5\z\o\4\m\m\8\s\8\q\c\f\f\u\r\k\5\q\s\i\q\9\a\v\w\q\h\l\r\h\n\3\4\q\0\x\b\q\v\i\7\n\a\p\2\j\u\w\6\h\q\x\j\4\h\c\r\9\v\0\l\o\0\u\4\t\2\o\t\1\g\0\j\9\4\t\k\c\3\t\b\0\u\l\3\1\l\d\1\b\6\p\p\j\q\b\4\4\1\x\d\z\3\h\v\5\n\s\z\9\i\a\4\c\m\r\d\9\p\i\u\n\4\o\k\u\i\s\j\w\s\f\e\o\5\y\3\1\t\8\u\9\a\w\o\e\z\w\c\w\i\b\f\3\h\1\h\b\s\i\q\1\3\m\3\m\5\1\i\c\g\l\e\1\t\x\3\u\3\d\w\9\7\l\9\n\9\s\o\h\7\3\o\f\2\z\f\r\g\j\v\m\n\1\g\6\d\h\o\m\6\a\x\a\0\w\9\w\e\v\l\8\9\g\4\7\h\s\9\u\5\1\2\i\r\2\w\4\c\6\m\b\0\z\y\c\g\l\9\d\1\d\0\z\x\k\y\3\5\8\o\a\o\d\v\y\5\0\y\d\t\f\0\x\k\e\m\j\t\g\m\p\s\s\o\x\f\6\h\e\n\u\c\g\4\d\o\h\n\z\5\m\4\n\8\j\e\6\n\v\f\m\9\1\3\e\g\v\i\6\m\c\t\z\o\w\o\g\g\0\3\v\k\w\7\s ]] 00:08:24.265 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:24.265 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:24.265 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:24.265 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:24.265 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.265 02:12:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:24.265 [2024-11-08 02:12:26.012984] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:24.265 [2024-11-08 02:12:26.013080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73564 ] 00:08:24.524 [2024-11-08 02:12:26.150931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.524 [2024-11-08 02:12:26.182943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.524 [2024-11-08 02:12:26.209664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.524  [2024-11-08T02:12:26.408Z] Copying: 512/512 [B] (average 500 kBps) 00:08:24.524 00:08:24.524 02:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ q485v6gyr1crpbylg8sfyywxenkuucmp66h1ydtidzf0v45xuw5ms971ehm6wwzjwhmcz9c67j1x5qa81l6bln7snppe7x6gled0wqz1n7d1grirt2ijr42topast9py6cinb3c7mxk7hhjk845lc63ueoxabex4gzew7sz2k5wnla5pvahejij8i3ix1w8d59nd9x88ebzjgmkrrng352zj8xrmkf8l6n7v01e9huqvpuqx6jb8cz9nw52wk2yw6fsfpy4f9lbg597l3tfbph6ntw7caju9bv4dtruht1grg11glt9lq4sg0nkoidwkpz583tx6frhn4utv5pwgz1y1flncxs0o691skezesn2tuz83qpjxh1s78nofgduziyq35ke8w5ruwjg9wjkq82dwaroocujcs2uw38w85t60a7u40asagaqok5gxditzamsphk4ayy73khie5cpyt25xju2qqbzs2e0ppzt6gid1iul01wydvauy48gr89iu == \q\4\8\5\v\6\g\y\r\1\c\r\p\b\y\l\g\8\s\f\y\y\w\x\e\n\k\u\u\c\m\p\6\6\h\1\y\d\t\i\d\z\f\0\v\4\5\x\u\w\5\m\s\9\7\1\e\h\m\6\w\w\z\j\w\h\m\c\z\9\c\6\7\j\1\x\5\q\a\8\1\l\6\b\l\n\7\s\n\p\p\e\7\x\6\g\l\e\d\0\w\q\z\1\n\7\d\1\g\r\i\r\t\2\i\j\r\4\2\t\o\p\a\s\t\9\p\y\6\c\i\n\b\3\c\7\m\x\k\7\h\h\j\k\8\4\5\l\c\6\3\u\e\o\x\a\b\e\x\4\g\z\e\w\7\s\z\2\k\5\w\n\l\a\5\p\v\a\h\e\j\i\j\8\i\3\i\x\1\w\8\d\5\9\n\d\9\x\8\8\e\b\z\j\g\m\k\r\r\n\g\3\5\2\z\j\8\x\r\m\k\f\8\l\6\n\7\v\0\1\e\9\h\u\q\v\p\u\q\x\6\j\b\8\c\z\9\n\w\5\2\w\k\2\y\w\6\f\s\f\p\y\4\f\9\l\b\g\5\9\7\l\3\t\f\b\p\h\6\n\t\w\7\c\a\j\u\9\b\v\4\d\t\r\u\h\t\1\g\r\g\1\1\g\l\t\9\l\q\4\s\g\0\n\k\o\i\d\w\k\p\z\5\8\3\t\x\6\f\r\h\n\4\u\t\v\5\p\w\g\z\1\y\1\f\l\n\c\x\s\0\o\6\9\1\s\k\e\z\e\s\n\2\t\u\z\8\3\q\p\j\x\h\1\s\7\8\n\o\f\g\d\u\z\i\y\q\3\5\k\e\8\w\5\r\u\w\j\g\9\w\j\k\q\8\2\d\w\a\r\o\o\c\u\j\c\s\2\u\w\3\8\w\8\5\t\6\0\a\7\u\4\0\a\s\a\g\a\q\o\k\5\g\x\d\i\t\z\a\m\s\p\h\k\4\a\y\y\7\3\k\h\i\e\5\c\p\y\t\2\5\x\j\u\2\q\q\b\z\s\2\e\0\p\p\z\t\6\g\i\d\1\i\u\l\0\1\w\y\d\v\a\u\y\4\8\g\r\8\9\i\u ]] 00:08:24.524 02:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.524 02:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:24.783 [2024-11-08 02:12:26.437855] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:24.784 [2024-11-08 02:12:26.437962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73566 ] 00:08:24.784 [2024-11-08 02:12:26.576145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.784 [2024-11-08 02:12:26.610588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.784 [2024-11-08 02:12:26.639285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.784  [2024-11-08T02:12:26.927Z] Copying: 512/512 [B] (average 500 kBps) 00:08:25.043 00:08:25.043 02:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ q485v6gyr1crpbylg8sfyywxenkuucmp66h1ydtidzf0v45xuw5ms971ehm6wwzjwhmcz9c67j1x5qa81l6bln7snppe7x6gled0wqz1n7d1grirt2ijr42topast9py6cinb3c7mxk7hhjk845lc63ueoxabex4gzew7sz2k5wnla5pvahejij8i3ix1w8d59nd9x88ebzjgmkrrng352zj8xrmkf8l6n7v01e9huqvpuqx6jb8cz9nw52wk2yw6fsfpy4f9lbg597l3tfbph6ntw7caju9bv4dtruht1grg11glt9lq4sg0nkoidwkpz583tx6frhn4utv5pwgz1y1flncxs0o691skezesn2tuz83qpjxh1s78nofgduziyq35ke8w5ruwjg9wjkq82dwaroocujcs2uw38w85t60a7u40asagaqok5gxditzamsphk4ayy73khie5cpyt25xju2qqbzs2e0ppzt6gid1iul01wydvauy48gr89iu == \q\4\8\5\v\6\g\y\r\1\c\r\p\b\y\l\g\8\s\f\y\y\w\x\e\n\k\u\u\c\m\p\6\6\h\1\y\d\t\i\d\z\f\0\v\4\5\x\u\w\5\m\s\9\7\1\e\h\m\6\w\w\z\j\w\h\m\c\z\9\c\6\7\j\1\x\5\q\a\8\1\l\6\b\l\n\7\s\n\p\p\e\7\x\6\g\l\e\d\0\w\q\z\1\n\7\d\1\g\r\i\r\t\2\i\j\r\4\2\t\o\p\a\s\t\9\p\y\6\c\i\n\b\3\c\7\m\x\k\7\h\h\j\k\8\4\5\l\c\6\3\u\e\o\x\a\b\e\x\4\g\z\e\w\7\s\z\2\k\5\w\n\l\a\5\p\v\a\h\e\j\i\j\8\i\3\i\x\1\w\8\d\5\9\n\d\9\x\8\8\e\b\z\j\g\m\k\r\r\n\g\3\5\2\z\j\8\x\r\m\k\f\8\l\6\n\7\v\0\1\e\9\h\u\q\v\p\u\q\x\6\j\b\8\c\z\9\n\w\5\2\w\k\2\y\w\6\f\s\f\p\y\4\f\9\l\b\g\5\9\7\l\3\t\f\b\p\h\6\n\t\w\7\c\a\j\u\9\b\v\4\d\t\r\u\h\t\1\g\r\g\1\1\g\l\t\9\l\q\4\s\g\0\n\k\o\i\d\w\k\p\z\5\8\3\t\x\6\f\r\h\n\4\u\t\v\5\p\w\g\z\1\y\1\f\l\n\c\x\s\0\o\6\9\1\s\k\e\z\e\s\n\2\t\u\z\8\3\q\p\j\x\h\1\s\7\8\n\o\f\g\d\u\z\i\y\q\3\5\k\e\8\w\5\r\u\w\j\g\9\w\j\k\q\8\2\d\w\a\r\o\o\c\u\j\c\s\2\u\w\3\8\w\8\5\t\6\0\a\7\u\4\0\a\s\a\g\a\q\o\k\5\g\x\d\i\t\z\a\m\s\p\h\k\4\a\y\y\7\3\k\h\i\e\5\c\p\y\t\2\5\x\j\u\2\q\q\b\z\s\2\e\0\p\p\z\t\6\g\i\d\1\i\u\l\0\1\w\y\d\v\a\u\y\4\8\g\r\8\9\i\u ]] 00:08:25.043 02:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.043 02:12:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:25.043 [2024-11-08 02:12:26.865820] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:25.043 [2024-11-08 02:12:26.865926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73579 ] 00:08:25.301 [2024-11-08 02:12:27.003696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.301 [2024-11-08 02:12:27.035341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.301 [2024-11-08 02:12:27.063929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.301  [2024-11-08T02:12:27.444Z] Copying: 512/512 [B] (average 166 kBps) 00:08:25.560 00:08:25.560 02:12:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ q485v6gyr1crpbylg8sfyywxenkuucmp66h1ydtidzf0v45xuw5ms971ehm6wwzjwhmcz9c67j1x5qa81l6bln7snppe7x6gled0wqz1n7d1grirt2ijr42topast9py6cinb3c7mxk7hhjk845lc63ueoxabex4gzew7sz2k5wnla5pvahejij8i3ix1w8d59nd9x88ebzjgmkrrng352zj8xrmkf8l6n7v01e9huqvpuqx6jb8cz9nw52wk2yw6fsfpy4f9lbg597l3tfbph6ntw7caju9bv4dtruht1grg11glt9lq4sg0nkoidwkpz583tx6frhn4utv5pwgz1y1flncxs0o691skezesn2tuz83qpjxh1s78nofgduziyq35ke8w5ruwjg9wjkq82dwaroocujcs2uw38w85t60a7u40asagaqok5gxditzamsphk4ayy73khie5cpyt25xju2qqbzs2e0ppzt6gid1iul01wydvauy48gr89iu == \q\4\8\5\v\6\g\y\r\1\c\r\p\b\y\l\g\8\s\f\y\y\w\x\e\n\k\u\u\c\m\p\6\6\h\1\y\d\t\i\d\z\f\0\v\4\5\x\u\w\5\m\s\9\7\1\e\h\m\6\w\w\z\j\w\h\m\c\z\9\c\6\7\j\1\x\5\q\a\8\1\l\6\b\l\n\7\s\n\p\p\e\7\x\6\g\l\e\d\0\w\q\z\1\n\7\d\1\g\r\i\r\t\2\i\j\r\4\2\t\o\p\a\s\t\9\p\y\6\c\i\n\b\3\c\7\m\x\k\7\h\h\j\k\8\4\5\l\c\6\3\u\e\o\x\a\b\e\x\4\g\z\e\w\7\s\z\2\k\5\w\n\l\a\5\p\v\a\h\e\j\i\j\8\i\3\i\x\1\w\8\d\5\9\n\d\9\x\8\8\e\b\z\j\g\m\k\r\r\n\g\3\5\2\z\j\8\x\r\m\k\f\8\l\6\n\7\v\0\1\e\9\h\u\q\v\p\u\q\x\6\j\b\8\c\z\9\n\w\5\2\w\k\2\y\w\6\f\s\f\p\y\4\f\9\l\b\g\5\9\7\l\3\t\f\b\p\h\6\n\t\w\7\c\a\j\u\9\b\v\4\d\t\r\u\h\t\1\g\r\g\1\1\g\l\t\9\l\q\4\s\g\0\n\k\o\i\d\w\k\p\z\5\8\3\t\x\6\f\r\h\n\4\u\t\v\5\p\w\g\z\1\y\1\f\l\n\c\x\s\0\o\6\9\1\s\k\e\z\e\s\n\2\t\u\z\8\3\q\p\j\x\h\1\s\7\8\n\o\f\g\d\u\z\i\y\q\3\5\k\e\8\w\5\r\u\w\j\g\9\w\j\k\q\8\2\d\w\a\r\o\o\c\u\j\c\s\2\u\w\3\8\w\8\5\t\6\0\a\7\u\4\0\a\s\a\g\a\q\o\k\5\g\x\d\i\t\z\a\m\s\p\h\k\4\a\y\y\7\3\k\h\i\e\5\c\p\y\t\2\5\x\j\u\2\q\q\b\z\s\2\e\0\p\p\z\t\6\g\i\d\1\i\u\l\0\1\w\y\d\v\a\u\y\4\8\g\r\8\9\i\u ]] 00:08:25.560 02:12:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.560 02:12:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:25.560 [2024-11-08 02:12:27.291735] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:25.560 [2024-11-08 02:12:27.291836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73581 ] 00:08:25.560 [2024-11-08 02:12:27.427246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.820 [2024-11-08 02:12:27.462673] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.820 [2024-11-08 02:12:27.494642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.820  [2024-11-08T02:12:27.704Z] Copying: 512/512 [B] (average 500 kBps) 00:08:25.820 00:08:25.820 02:12:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ q485v6gyr1crpbylg8sfyywxenkuucmp66h1ydtidzf0v45xuw5ms971ehm6wwzjwhmcz9c67j1x5qa81l6bln7snppe7x6gled0wqz1n7d1grirt2ijr42topast9py6cinb3c7mxk7hhjk845lc63ueoxabex4gzew7sz2k5wnla5pvahejij8i3ix1w8d59nd9x88ebzjgmkrrng352zj8xrmkf8l6n7v01e9huqvpuqx6jb8cz9nw52wk2yw6fsfpy4f9lbg597l3tfbph6ntw7caju9bv4dtruht1grg11glt9lq4sg0nkoidwkpz583tx6frhn4utv5pwgz1y1flncxs0o691skezesn2tuz83qpjxh1s78nofgduziyq35ke8w5ruwjg9wjkq82dwaroocujcs2uw38w85t60a7u40asagaqok5gxditzamsphk4ayy73khie5cpyt25xju2qqbzs2e0ppzt6gid1iul01wydvauy48gr89iu == \q\4\8\5\v\6\g\y\r\1\c\r\p\b\y\l\g\8\s\f\y\y\w\x\e\n\k\u\u\c\m\p\6\6\h\1\y\d\t\i\d\z\f\0\v\4\5\x\u\w\5\m\s\9\7\1\e\h\m\6\w\w\z\j\w\h\m\c\z\9\c\6\7\j\1\x\5\q\a\8\1\l\6\b\l\n\7\s\n\p\p\e\7\x\6\g\l\e\d\0\w\q\z\1\n\7\d\1\g\r\i\r\t\2\i\j\r\4\2\t\o\p\a\s\t\9\p\y\6\c\i\n\b\3\c\7\m\x\k\7\h\h\j\k\8\4\5\l\c\6\3\u\e\o\x\a\b\e\x\4\g\z\e\w\7\s\z\2\k\5\w\n\l\a\5\p\v\a\h\e\j\i\j\8\i\3\i\x\1\w\8\d\5\9\n\d\9\x\8\8\e\b\z\j\g\m\k\r\r\n\g\3\5\2\z\j\8\x\r\m\k\f\8\l\6\n\7\v\0\1\e\9\h\u\q\v\p\u\q\x\6\j\b\8\c\z\9\n\w\5\2\w\k\2\y\w\6\f\s\f\p\y\4\f\9\l\b\g\5\9\7\l\3\t\f\b\p\h\6\n\t\w\7\c\a\j\u\9\b\v\4\d\t\r\u\h\t\1\g\r\g\1\1\g\l\t\9\l\q\4\s\g\0\n\k\o\i\d\w\k\p\z\5\8\3\t\x\6\f\r\h\n\4\u\t\v\5\p\w\g\z\1\y\1\f\l\n\c\x\s\0\o\6\9\1\s\k\e\z\e\s\n\2\t\u\z\8\3\q\p\j\x\h\1\s\7\8\n\o\f\g\d\u\z\i\y\q\3\5\k\e\8\w\5\r\u\w\j\g\9\w\j\k\q\8\2\d\w\a\r\o\o\c\u\j\c\s\2\u\w\3\8\w\8\5\t\6\0\a\7\u\4\0\a\s\a\g\a\q\o\k\5\g\x\d\i\t\z\a\m\s\p\h\k\4\a\y\y\7\3\k\h\i\e\5\c\p\y\t\2\5\x\j\u\2\q\q\b\z\s\2\e\0\p\p\z\t\6\g\i\d\1\i\u\l\0\1\w\y\d\v\a\u\y\4\8\g\r\8\9\i\u ]] 00:08:25.820 00:08:25.820 real 0m3.421s 00:08:25.820 user 0m1.671s 00:08:25.820 sys 0m0.766s 00:08:25.820 02:12:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.820 02:12:27 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:25.820 ************************************ 00:08:25.820 END TEST dd_flags_misc_forced_aio 00:08:25.820 ************************************ 00:08:26.079 02:12:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:26.079 02:12:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:26.079 02:12:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:26.079 00:08:26.079 real 0m16.028s 00:08:26.079 user 0m6.837s 00:08:26.079 sys 0m4.505s 00:08:26.079 02:12:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.079 02:12:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:08:26.079 ************************************ 00:08:26.079 END TEST spdk_dd_posix 00:08:26.079 ************************************ 00:08:26.079 02:12:27 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:26.079 02:12:27 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.079 02:12:27 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.079 02:12:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:26.079 ************************************ 00:08:26.079 START TEST spdk_dd_malloc 00:08:26.079 ************************************ 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:26.079 * Looking for test storage... 00:08:26.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:26.079 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:26.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.080 --rc genhtml_branch_coverage=1 00:08:26.080 --rc genhtml_function_coverage=1 00:08:26.080 --rc genhtml_legend=1 00:08:26.080 --rc geninfo_all_blocks=1 00:08:26.080 --rc geninfo_unexecuted_blocks=1 00:08:26.080 00:08:26.080 ' 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:26.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.080 --rc genhtml_branch_coverage=1 00:08:26.080 --rc genhtml_function_coverage=1 00:08:26.080 --rc genhtml_legend=1 00:08:26.080 --rc geninfo_all_blocks=1 00:08:26.080 --rc geninfo_unexecuted_blocks=1 00:08:26.080 00:08:26.080 ' 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:26.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.080 --rc genhtml_branch_coverage=1 00:08:26.080 --rc genhtml_function_coverage=1 00:08:26.080 --rc genhtml_legend=1 00:08:26.080 --rc geninfo_all_blocks=1 00:08:26.080 --rc geninfo_unexecuted_blocks=1 00:08:26.080 00:08:26.080 ' 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:26.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.080 --rc genhtml_branch_coverage=1 00:08:26.080 --rc genhtml_function_coverage=1 00:08:26.080 --rc genhtml_legend=1 00:08:26.080 --rc geninfo_all_blocks=1 00:08:26.080 --rc geninfo_unexecuted_blocks=1 00:08:26.080 00:08:26.080 ' 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.080 02:12:27 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.080 02:12:27 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:26.339 ************************************ 00:08:26.339 START TEST dd_malloc_copy 00:08:26.339 ************************************ 00:08:26.339 02:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:08:26.339 02:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:26.339 02:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:26.339 02:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:26.339 02:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:26.339 02:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:26.339 02:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:26.340 02:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:26.340 02:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:26.340 02:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:26.340 02:12:27 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:26.340 [2024-11-08 02:12:28.021922] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:26.340 [2024-11-08 02:12:28.022032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73663 ] 00:08:26.340 { 00:08:26.340 "subsystems": [ 00:08:26.340 { 00:08:26.340 "subsystem": "bdev", 00:08:26.340 "config": [ 00:08:26.340 { 00:08:26.340 "params": { 00:08:26.340 "block_size": 512, 00:08:26.340 "num_blocks": 1048576, 00:08:26.340 "name": "malloc0" 00:08:26.340 }, 00:08:26.340 "method": "bdev_malloc_create" 00:08:26.340 }, 00:08:26.340 { 00:08:26.340 "params": { 00:08:26.340 "block_size": 512, 00:08:26.340 "num_blocks": 1048576, 00:08:26.340 "name": "malloc1" 00:08:26.340 }, 00:08:26.340 "method": "bdev_malloc_create" 00:08:26.340 }, 00:08:26.340 { 00:08:26.340 "method": "bdev_wait_for_examine" 00:08:26.340 } 00:08:26.340 ] 00:08:26.340 } 00:08:26.340 ] 00:08:26.340 } 00:08:26.340 [2024-11-08 02:12:28.162321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.340 [2024-11-08 02:12:28.205382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.599 [2024-11-08 02:12:28.240359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.978  [2024-11-08T02:12:30.802Z] Copying: 233/512 [MB] (233 MBps) [2024-11-08T02:12:30.802Z] Copying: 467/512 [MB] (234 MBps) [2024-11-08T02:12:31.061Z] Copying: 512/512 [MB] (average 234 MBps) 00:08:29.177 00:08:29.177 02:12:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:29.177 02:12:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:29.177 02:12:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:29.177 02:12:30 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:29.177 [2024-11-08 02:12:30.997914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
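For reference, the dd_malloc_copy step drives spdk_dd between two in-memory malloc bdevs rather than files: a JSON config creates malloc0 and malloc1 (1048576 blocks of 512 bytes, i.e. 512 MiB each) and the copy runs with --ib/--ob, then back again in the opposite direction. A hedged, stand-alone version of that invocation follows; the real script builds this JSON with gen_conf and passes it on file descriptor 62, whereas the temporary file path here is purely illustrative.

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    cat > /tmp/malloc.json <<'EOF'          # illustrative path; the test passes the config on /dev/fd/62
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
              "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

    "$DD" --ib=malloc0 --ob=malloc1 --json /tmp/malloc.json   # copy malloc0 -> malloc1 (512 MiB)
    "$DD" --ib=malloc1 --ob=malloc0 --json /tmp/malloc.json   # and back again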
00:08:29.177 [2024-11-08 02:12:30.998015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73705 ] 00:08:29.177 { 00:08:29.177 "subsystems": [ 00:08:29.177 { 00:08:29.177 "subsystem": "bdev", 00:08:29.177 "config": [ 00:08:29.177 { 00:08:29.177 "params": { 00:08:29.177 "block_size": 512, 00:08:29.177 "num_blocks": 1048576, 00:08:29.177 "name": "malloc0" 00:08:29.177 }, 00:08:29.177 "method": "bdev_malloc_create" 00:08:29.177 }, 00:08:29.177 { 00:08:29.177 "params": { 00:08:29.177 "block_size": 512, 00:08:29.177 "num_blocks": 1048576, 00:08:29.177 "name": "malloc1" 00:08:29.177 }, 00:08:29.177 "method": "bdev_malloc_create" 00:08:29.177 }, 00:08:29.177 { 00:08:29.177 "method": "bdev_wait_for_examine" 00:08:29.177 } 00:08:29.177 ] 00:08:29.177 } 00:08:29.177 ] 00:08:29.177 } 00:08:29.436 [2024-11-08 02:12:31.135726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.436 [2024-11-08 02:12:31.169497] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.436 [2024-11-08 02:12:31.196916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.813  [2024-11-08T02:12:33.634Z] Copying: 239/512 [MB] (239 MBps) [2024-11-08T02:12:33.634Z] Copying: 483/512 [MB] (244 MBps) [2024-11-08T02:12:33.892Z] Copying: 512/512 [MB] (average 242 MBps) 00:08:32.008 00:08:32.008 00:08:32.008 real 0m5.836s 00:08:32.008 user 0m5.194s 00:08:32.008 sys 0m0.498s 00:08:32.008 02:12:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.008 02:12:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 ************************************ 00:08:32.008 END TEST dd_malloc_copy 00:08:32.008 ************************************ 00:08:32.008 00:08:32.008 real 0m6.077s 00:08:32.008 user 0m5.330s 00:08:32.008 sys 0m0.607s 00:08:32.008 02:12:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.008 02:12:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:32.008 ************************************ 00:08:32.008 END TEST spdk_dd_malloc 00:08:32.008 ************************************ 00:08:32.008 02:12:33 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:32.008 02:12:33 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:32.008 02:12:33 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.008 02:12:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:32.268 ************************************ 00:08:32.268 START TEST spdk_dd_bdev_to_bdev 00:08:32.268 ************************************ 00:08:32.268 02:12:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:32.268 * Looking for test storage... 
00:08:32.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:32.268 02:12:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:32.268 02:12:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:08:32.268 02:12:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:32.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.268 --rc genhtml_branch_coverage=1 00:08:32.268 --rc genhtml_function_coverage=1 00:08:32.268 --rc genhtml_legend=1 00:08:32.268 --rc geninfo_all_blocks=1 00:08:32.268 --rc geninfo_unexecuted_blocks=1 00:08:32.268 00:08:32.268 ' 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:32.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.268 --rc genhtml_branch_coverage=1 00:08:32.268 --rc genhtml_function_coverage=1 00:08:32.268 --rc genhtml_legend=1 00:08:32.268 --rc geninfo_all_blocks=1 00:08:32.268 --rc geninfo_unexecuted_blocks=1 00:08:32.268 00:08:32.268 ' 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:32.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.268 --rc genhtml_branch_coverage=1 00:08:32.268 --rc genhtml_function_coverage=1 00:08:32.268 --rc genhtml_legend=1 00:08:32.268 --rc geninfo_all_blocks=1 00:08:32.268 --rc geninfo_unexecuted_blocks=1 00:08:32.268 00:08:32.268 ' 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:32.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.268 --rc genhtml_branch_coverage=1 00:08:32.268 --rc genhtml_function_coverage=1 00:08:32.268 --rc genhtml_legend=1 00:08:32.268 --rc geninfo_all_blocks=1 00:08:32.268 --rc geninfo_unexecuted_blocks=1 00:08:32.268 00:08:32.268 ' 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.268 02:12:34 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:32.268 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:32.269 ************************************ 00:08:32.269 START TEST dd_inflate_file 00:08:32.269 ************************************ 00:08:32.269 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:32.269 [2024-11-08 02:12:34.149476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
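dd_inflate_file, whose run is starting above, grows the dump file by appending 64 one-MiB blocks of zeroes through spdk_dd instead of coreutils dd. A rough equivalent, with the repo path shortened for readability:

# append 64 x 1 MiB of zeroes to the dump file (it already holds the 27-byte magic line)
./build/bin/spdk_dd --if=/dev/zero --of=test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
# the suite then sizes the file with wc -c; 67108891 here = 64 MiB + 27 bytes of magic
wc -c < test/dd/dd.dump0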
00:08:32.269 [2024-11-08 02:12:34.149599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73812 ] 00:08:32.528 [2024-11-08 02:12:34.288377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.528 [2024-11-08 02:12:34.321645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.528 [2024-11-08 02:12:34.347567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.528  [2024-11-08T02:12:34.671Z] Copying: 64/64 [MB] (average 1600 MBps) 00:08:32.787 00:08:32.787 00:08:32.787 real 0m0.448s 00:08:32.787 user 0m0.248s 00:08:32.787 sys 0m0.217s 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:32.787 ************************************ 00:08:32.787 END TEST dd_inflate_file 00:08:32.787 ************************************ 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:32.787 ************************************ 00:08:32.787 START TEST dd_copy_to_out_bdev 00:08:32.787 ************************************ 00:08:32.787 02:12:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:32.787 { 00:08:32.787 "subsystems": [ 00:08:32.787 { 00:08:32.787 "subsystem": "bdev", 00:08:32.787 "config": [ 00:08:32.787 { 00:08:32.787 "params": { 00:08:32.787 "trtype": "pcie", 00:08:32.787 "traddr": "0000:00:10.0", 00:08:32.787 "name": "Nvme0" 00:08:32.787 }, 00:08:32.787 "method": "bdev_nvme_attach_controller" 00:08:32.787 }, 00:08:32.787 { 00:08:32.787 "params": { 00:08:32.787 "trtype": "pcie", 00:08:32.787 "traddr": "0000:00:11.0", 00:08:32.787 "name": "Nvme1" 00:08:32.787 }, 00:08:32.787 "method": "bdev_nvme_attach_controller" 00:08:32.787 }, 00:08:32.787 { 00:08:32.787 "method": "bdev_wait_for_examine" 00:08:32.787 } 00:08:32.787 ] 00:08:32.787 } 00:08:32.787 ] 00:08:32.787 } 00:08:32.787 [2024-11-08 02:12:34.654921] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:32.787 [2024-11-08 02:12:34.655031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73840 ] 00:08:33.046 [2024-11-08 02:12:34.795029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.046 [2024-11-08 02:12:34.827063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.046 [2024-11-08 02:12:34.856760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.422  [2024-11-08T02:12:36.306Z] Copying: 49/64 [MB] (49 MBps) [2024-11-08T02:12:36.564Z] Copying: 64/64 [MB] (average 50 MBps) 00:08:34.680 00:08:34.680 ************************************ 00:08:34.680 END TEST dd_copy_to_out_bdev 00:08:34.680 ************************************ 00:08:34.680 00:08:34.680 real 0m1.868s 00:08:34.680 user 0m1.687s 00:08:34.680 sys 0m1.513s 00:08:34.680 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.680 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:34.680 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:34.680 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:34.680 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.680 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.680 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:34.680 ************************************ 00:08:34.681 START TEST dd_offset_magic 00:08:34.681 ************************************ 00:08:34.681 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:08:34.681 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:34.681 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:34.681 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:34.681 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:34.681 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:34.681 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:34.681 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:34.681 02:12:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:34.939 [2024-11-08 02:12:36.570417] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:34.939 [2024-11-08 02:12:36.570518] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73885 ] 00:08:34.939 { 00:08:34.939 "subsystems": [ 00:08:34.939 { 00:08:34.939 "subsystem": "bdev", 00:08:34.939 "config": [ 00:08:34.939 { 00:08:34.939 "params": { 00:08:34.939 "trtype": "pcie", 00:08:34.939 "traddr": "0000:00:10.0", 00:08:34.939 "name": "Nvme0" 00:08:34.939 }, 00:08:34.939 "method": "bdev_nvme_attach_controller" 00:08:34.939 }, 00:08:34.939 { 00:08:34.939 "params": { 00:08:34.939 "trtype": "pcie", 00:08:34.939 "traddr": "0000:00:11.0", 00:08:34.939 "name": "Nvme1" 00:08:34.939 }, 00:08:34.939 "method": "bdev_nvme_attach_controller" 00:08:34.939 }, 00:08:34.939 { 00:08:34.939 "method": "bdev_wait_for_examine" 00:08:34.939 } 00:08:34.939 ] 00:08:34.939 } 00:08:34.939 ] 00:08:34.939 } 00:08:34.939 [2024-11-08 02:12:36.701908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.939 [2024-11-08 02:12:36.736532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.939 [2024-11-08 02:12:36.763040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.198  [2024-11-08T02:12:37.342Z] Copying: 65/65 [MB] (average 970 MBps) 00:08:35.458 00:08:35.458 02:12:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:35.458 02:12:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:35.458 02:12:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:35.458 02:12:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:35.458 [2024-11-08 02:12:37.206665] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:35.458 [2024-11-08 02:12:37.206763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73905 ] 00:08:35.458 { 00:08:35.458 "subsystems": [ 00:08:35.458 { 00:08:35.458 "subsystem": "bdev", 00:08:35.458 "config": [ 00:08:35.458 { 00:08:35.458 "params": { 00:08:35.458 "trtype": "pcie", 00:08:35.458 "traddr": "0000:00:10.0", 00:08:35.458 "name": "Nvme0" 00:08:35.458 }, 00:08:35.458 "method": "bdev_nvme_attach_controller" 00:08:35.458 }, 00:08:35.458 { 00:08:35.458 "params": { 00:08:35.458 "trtype": "pcie", 00:08:35.458 "traddr": "0000:00:11.0", 00:08:35.458 "name": "Nvme1" 00:08:35.458 }, 00:08:35.458 "method": "bdev_nvme_attach_controller" 00:08:35.458 }, 00:08:35.458 { 00:08:35.458 "method": "bdev_wait_for_examine" 00:08:35.458 } 00:08:35.458 ] 00:08:35.458 } 00:08:35.458 ] 00:08:35.458 } 00:08:35.718 [2024-11-08 02:12:37.345188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.718 [2024-11-08 02:12:37.376885] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.718 [2024-11-08 02:12:37.403537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.718  [2024-11-08T02:12:37.861Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:35.977 00:08:35.977 02:12:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:35.977 02:12:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:35.977 02:12:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:35.977 02:12:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:35.977 02:12:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:35.977 02:12:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:35.977 02:12:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:35.977 [2024-11-08 02:12:37.744763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:35.977 [2024-11-08 02:12:37.744871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73916 ] 00:08:35.977 { 00:08:35.977 "subsystems": [ 00:08:35.977 { 00:08:35.977 "subsystem": "bdev", 00:08:35.977 "config": [ 00:08:35.977 { 00:08:35.977 "params": { 00:08:35.977 "trtype": "pcie", 00:08:35.977 "traddr": "0000:00:10.0", 00:08:35.977 "name": "Nvme0" 00:08:35.977 }, 00:08:35.977 "method": "bdev_nvme_attach_controller" 00:08:35.977 }, 00:08:35.977 { 00:08:35.977 "params": { 00:08:35.977 "trtype": "pcie", 00:08:35.977 "traddr": "0000:00:11.0", 00:08:35.977 "name": "Nvme1" 00:08:35.977 }, 00:08:35.977 "method": "bdev_nvme_attach_controller" 00:08:35.977 }, 00:08:35.977 { 00:08:35.977 "method": "bdev_wait_for_examine" 00:08:35.977 } 00:08:35.977 ] 00:08:35.977 } 00:08:35.977 ] 00:08:35.977 } 00:08:36.236 [2024-11-08 02:12:37.884072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.236 [2024-11-08 02:12:37.915670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.236 [2024-11-08 02:12:37.942343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.495  [2024-11-08T02:12:38.379Z] Copying: 65/65 [MB] (average 1031 MBps) 00:08:36.495 00:08:36.495 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:36.495 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:36.495 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:36.495 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:36.754 [2024-11-08 02:12:38.383583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:36.754 [2024-11-08 02:12:38.383683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73936 ] 00:08:36.754 { 00:08:36.754 "subsystems": [ 00:08:36.754 { 00:08:36.754 "subsystem": "bdev", 00:08:36.754 "config": [ 00:08:36.754 { 00:08:36.754 "params": { 00:08:36.754 "trtype": "pcie", 00:08:36.754 "traddr": "0000:00:10.0", 00:08:36.754 "name": "Nvme0" 00:08:36.754 }, 00:08:36.754 "method": "bdev_nvme_attach_controller" 00:08:36.754 }, 00:08:36.754 { 00:08:36.754 "params": { 00:08:36.754 "trtype": "pcie", 00:08:36.754 "traddr": "0000:00:11.0", 00:08:36.754 "name": "Nvme1" 00:08:36.754 }, 00:08:36.754 "method": "bdev_nvme_attach_controller" 00:08:36.754 }, 00:08:36.754 { 00:08:36.754 "method": "bdev_wait_for_examine" 00:08:36.754 } 00:08:36.754 ] 00:08:36.754 } 00:08:36.754 ] 00:08:36.754 } 00:08:36.754 [2024-11-08 02:12:38.524058] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.754 [2024-11-08 02:12:38.556655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.754 [2024-11-08 02:12:38.584531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.014  [2024-11-08T02:12:38.898Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:37.014 00:08:37.014 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:37.014 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:37.014 00:08:37.014 real 0m2.367s 00:08:37.014 user 0m1.783s 00:08:37.014 sys 0m0.593s 00:08:37.014 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.014 ************************************ 00:08:37.014 END TEST dd_offset_magic 00:08:37.014 ************************************ 00:08:37.014 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:37.273 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:37.273 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:37.273 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:37.273 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:37.273 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:37.273 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:37.273 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:37.273 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:37.273 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:37.273 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:37.273 02:12:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:37.273 [2024-11-08 02:12:38.991907] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:37.273 [2024-11-08 02:12:38.992007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73968 ] 00:08:37.273 { 00:08:37.273 "subsystems": [ 00:08:37.273 { 00:08:37.273 "subsystem": "bdev", 00:08:37.273 "config": [ 00:08:37.273 { 00:08:37.273 "params": { 00:08:37.273 "trtype": "pcie", 00:08:37.273 "traddr": "0000:00:10.0", 00:08:37.273 "name": "Nvme0" 00:08:37.273 }, 00:08:37.273 "method": "bdev_nvme_attach_controller" 00:08:37.273 }, 00:08:37.273 { 00:08:37.274 "params": { 00:08:37.274 "trtype": "pcie", 00:08:37.274 "traddr": "0000:00:11.0", 00:08:37.274 "name": "Nvme1" 00:08:37.274 }, 00:08:37.274 "method": "bdev_nvme_attach_controller" 00:08:37.274 }, 00:08:37.274 { 00:08:37.274 "method": "bdev_wait_for_examine" 00:08:37.274 } 00:08:37.274 ] 00:08:37.274 } 00:08:37.274 ] 00:08:37.274 } 00:08:37.274 [2024-11-08 02:12:39.129043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.533 [2024-11-08 02:12:39.160303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.533 [2024-11-08 02:12:39.186130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.533  [2024-11-08T02:12:39.676Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:37.792 00:08:37.792 02:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:37.792 02:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:37.792 02:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:37.792 02:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:37.792 02:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:37.792 02:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:37.792 02:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:37.792 02:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:37.792 02:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:37.792 02:12:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:37.792 [2024-11-08 02:12:39.533222] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
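Cleanup then zero-fills the start of each namespace through the same attach config (clear_nvme in dd/common.sh): Nvme0n1 was cleared above and Nvme1n1 is being cleared here, 5 MiB each, before the dump files are removed. Roughly, with the nvme.json sketch from before:

for bdev in Nvme0n1 Nvme1n1; do
    # overwrite the first 5 x 1 MiB blocks of the namespace with zeroes
    ./build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=$bdev --count=5 --json nvme.json
done
rm -f test/dd/dd.dump0 test/dd/dd.dump1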
00:08:37.792 [2024-11-08 02:12:39.533319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73983 ] 00:08:37.792 { 00:08:37.792 "subsystems": [ 00:08:37.792 { 00:08:37.792 "subsystem": "bdev", 00:08:37.792 "config": [ 00:08:37.792 { 00:08:37.792 "params": { 00:08:37.792 "trtype": "pcie", 00:08:37.792 "traddr": "0000:00:10.0", 00:08:37.792 "name": "Nvme0" 00:08:37.792 }, 00:08:37.793 "method": "bdev_nvme_attach_controller" 00:08:37.793 }, 00:08:37.793 { 00:08:37.793 "params": { 00:08:37.793 "trtype": "pcie", 00:08:37.793 "traddr": "0000:00:11.0", 00:08:37.793 "name": "Nvme1" 00:08:37.793 }, 00:08:37.793 "method": "bdev_nvme_attach_controller" 00:08:37.793 }, 00:08:37.793 { 00:08:37.793 "method": "bdev_wait_for_examine" 00:08:37.793 } 00:08:37.793 ] 00:08:37.793 } 00:08:37.793 ] 00:08:37.793 } 00:08:37.793 [2024-11-08 02:12:39.672348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.052 [2024-11-08 02:12:39.703802] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.052 [2024-11-08 02:12:39.730896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.052  [2024-11-08T02:12:40.195Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:38.311 00:08:38.311 02:12:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:38.311 ************************************ 00:08:38.311 END TEST spdk_dd_bdev_to_bdev 00:08:38.311 ************************************ 00:08:38.311 00:08:38.311 real 0m6.150s 00:08:38.311 user 0m4.681s 00:08:38.311 sys 0m2.853s 00:08:38.311 02:12:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.311 02:12:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:38.311 02:12:40 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:38.311 02:12:40 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:38.311 02:12:40 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.311 02:12:40 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.311 02:12:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:38.311 ************************************ 00:08:38.311 START TEST spdk_dd_uring 00:08:38.311 ************************************ 00:08:38.311 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:38.311 * Looking for test storage... 
00:08:38.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:38.311 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:38.311 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:38.311 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:38.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.570 --rc genhtml_branch_coverage=1 00:08:38.570 --rc genhtml_function_coverage=1 00:08:38.570 --rc genhtml_legend=1 00:08:38.570 --rc geninfo_all_blocks=1 00:08:38.570 --rc geninfo_unexecuted_blocks=1 00:08:38.570 00:08:38.570 ' 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:38.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.570 --rc genhtml_branch_coverage=1 00:08:38.570 --rc genhtml_function_coverage=1 00:08:38.570 --rc genhtml_legend=1 00:08:38.570 --rc geninfo_all_blocks=1 00:08:38.570 --rc geninfo_unexecuted_blocks=1 00:08:38.570 00:08:38.570 ' 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:38.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.570 --rc genhtml_branch_coverage=1 00:08:38.570 --rc genhtml_function_coverage=1 00:08:38.570 --rc genhtml_legend=1 00:08:38.570 --rc geninfo_all_blocks=1 00:08:38.570 --rc geninfo_unexecuted_blocks=1 00:08:38.570 00:08:38.570 ' 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:38.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.570 --rc genhtml_branch_coverage=1 00:08:38.570 --rc genhtml_function_coverage=1 00:08:38.570 --rc genhtml_legend=1 00:08:38.570 --rc geninfo_all_blocks=1 00:08:38.570 --rc geninfo_unexecuted_blocks=1 00:08:38.570 00:08:38.570 ' 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.570 02:12:40 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:38.571 ************************************ 00:08:38.571 START TEST dd_uring_copy 00:08:38.571 ************************************ 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:38.571 
02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=izsbufxvd2kthioeqxt1ny662xfgz8sdrr60dm1ycjucvum4fb8zot97fq7nactshz885wq0q4x1l02y93scl5da28fbrr563gx12rki9hgg98vxoqrf35n224a7tyl8a7vsz9jfuczv3pz4g0j47yjskuribgm7mrhp1wpkmjvdowvrnft3b63u5b7be8rtxo4llnassptv9c3ciuu7jf5r78zn4pn194fpj9vyi371exybua6dhv69ndtbm8kj8o67m3me1jcihoskhk9sgmmjaj1u1ejjj2c4ug2rkuk8y803jkqej8s3s0eob4b7h7liyxttv4g7egltu39y5l6uizpj1nvta0ijdwny2zbi2k2ki3nr8779q6sobznvbyxadoqdyef81001ssteayqdrtnfmcnfeb3zh8bu7c9w9bgjwyf73ui8y4q4um8yq9150sbfyj8mh36iipzdl260jooc821mk28x3k6tnom2tql1dxm42w6j6zdijui1u2g8cpatrykat16m4nfzc43941dup7wn3h5vx4izic951dwk64yhczfuxcnqyukhmwnt664fmf5f2zj9asky8ut6xjcpbdjoefmu37zzs7nsuoae3b0daqaunmy4i06e80nz7etopy3j4njzg55lghl5f4mrlo20i8wava87as1ma1re5gjot51gc66t63mltz4t66eemnvzspugnx2sosksg9nvljucpanlfov2otyvmwm1dfff2d3tfxu5m48625rq81o4cm91hd8rsc51mh6v5u3suip3feax5zoii31t58frtag0ji8qi8xjiatbgj0t450cumcic6n8ue1nbqrrwblj8ubo91jblb5ghxabp232yvsj2pfycbnumct3c90wjtfpt4913ba6xqaeeq1gf58lxd2sp93rj0rwmg59lx95kexq72qyi5qrb3sezh0p9w565jsl4457818y6w6jj094lgot277rxcfdjyo9vh15elso63os0fz0nd9b 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
izsbufxvd2kthioeqxt1ny662xfgz8sdrr60dm1ycjucvum4fb8zot97fq7nactshz885wq0q4x1l02y93scl5da28fbrr563gx12rki9hgg98vxoqrf35n224a7tyl8a7vsz9jfuczv3pz4g0j47yjskuribgm7mrhp1wpkmjvdowvrnft3b63u5b7be8rtxo4llnassptv9c3ciuu7jf5r78zn4pn194fpj9vyi371exybua6dhv69ndtbm8kj8o67m3me1jcihoskhk9sgmmjaj1u1ejjj2c4ug2rkuk8y803jkqej8s3s0eob4b7h7liyxttv4g7egltu39y5l6uizpj1nvta0ijdwny2zbi2k2ki3nr8779q6sobznvbyxadoqdyef81001ssteayqdrtnfmcnfeb3zh8bu7c9w9bgjwyf73ui8y4q4um8yq9150sbfyj8mh36iipzdl260jooc821mk28x3k6tnom2tql1dxm42w6j6zdijui1u2g8cpatrykat16m4nfzc43941dup7wn3h5vx4izic951dwk64yhczfuxcnqyukhmwnt664fmf5f2zj9asky8ut6xjcpbdjoefmu37zzs7nsuoae3b0daqaunmy4i06e80nz7etopy3j4njzg55lghl5f4mrlo20i8wava87as1ma1re5gjot51gc66t63mltz4t66eemnvzspugnx2sosksg9nvljucpanlfov2otyvmwm1dfff2d3tfxu5m48625rq81o4cm91hd8rsc51mh6v5u3suip3feax5zoii31t58frtag0ji8qi8xjiatbgj0t450cumcic6n8ue1nbqrrwblj8ubo91jblb5ghxabp232yvsj2pfycbnumct3c90wjtfpt4913ba6xqaeeq1gf58lxd2sp93rj0rwmg59lx95kexq72qyi5qrb3sezh0p9w565jsl4457818y6w6jj094lgot277rxcfdjyo9vh15elso63os0fz0nd9b 00:08:38.571 02:12:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:38.571 [2024-11-08 02:12:40.380594] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:38.571 [2024-11-08 02:12:40.380886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74061 ] 00:08:38.830 [2024-11-08 02:12:40.519830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.830 [2024-11-08 02:12:40.550547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.830 [2024-11-08 02:12:40.576319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.397  [2024-11-08T02:12:41.281Z] Copying: 511/511 [MB] (average 1599 MBps) 00:08:39.397 00:08:39.397 02:12:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:39.397 02:12:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:39.397 02:12:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:39.397 02:12:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:39.659 [2024-11-08 02:12:41.297949] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
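dd_uring_copy drives an io_uring bdev backed by a zram device: a 512 MiB zram disk is hot-added, a dump file seeded with a 1024-byte random magic is inflated to roughly 512 MiB, and that file is copied into the uring bdev by the run starting above. A sketch of the setup; the disksize write is assumed from the standard zram sysfs interface (the log only shows "echo 512M"), gen_bytes is the suite's own helper from dd/common.sh, and uring.json is an illustrative config name:

# hot-add a zram device; the kernel returns its index (1 in this run)
id=$(cat /sys/class/zram-control/hot_add)
echo 512M > /sys/block/zram$id/disksize      # assumed target of the "echo 512M" step

# seed the dump file with a 1024-byte magic, then inflate it to ~512 MiB in one write
magic=$(gen_bytes 1024)
echo "$magic" > test/dd/magic.dump0
./build/bin/spdk_dd --if=/dev/zero --of=test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1

# expose the zram device as an io_uring bdev next to a malloc bdev
cat > uring.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create",
    "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_uring_create",
    "params": { "name": "uring0", "filename": "/dev/zram1" } },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF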
00:08:39.659 [2024-11-08 02:12:41.298020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74077 ] 00:08:39.659 { 00:08:39.659 "subsystems": [ 00:08:39.659 { 00:08:39.659 "subsystem": "bdev", 00:08:39.659 "config": [ 00:08:39.659 { 00:08:39.659 "params": { 00:08:39.659 "block_size": 512, 00:08:39.659 "num_blocks": 1048576, 00:08:39.659 "name": "malloc0" 00:08:39.659 }, 00:08:39.659 "method": "bdev_malloc_create" 00:08:39.659 }, 00:08:39.659 { 00:08:39.659 "params": { 00:08:39.659 "filename": "/dev/zram1", 00:08:39.659 "name": "uring0" 00:08:39.659 }, 00:08:39.659 "method": "bdev_uring_create" 00:08:39.659 }, 00:08:39.659 { 00:08:39.659 "method": "bdev_wait_for_examine" 00:08:39.659 } 00:08:39.659 ] 00:08:39.659 } 00:08:39.659 ] 00:08:39.659 } 00:08:39.659 [2024-11-08 02:12:41.431274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.659 [2024-11-08 02:12:41.461869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.659 [2024-11-08 02:12:41.488235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.058  [2024-11-08T02:12:43.879Z] Copying: 245/512 [MB] (245 MBps) [2024-11-08T02:12:43.879Z] Copying: 487/512 [MB] (242 MBps) [2024-11-08T02:12:44.138Z] Copying: 512/512 [MB] (average 243 MBps) 00:08:42.254 00:08:42.254 02:12:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:42.254 02:12:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:42.254 02:12:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:42.254 02:12:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:42.254 { 00:08:42.254 "subsystems": [ 00:08:42.254 { 00:08:42.254 "subsystem": "bdev", 00:08:42.254 "config": [ 00:08:42.254 { 00:08:42.254 "params": { 00:08:42.254 "block_size": 512, 00:08:42.254 "num_blocks": 1048576, 00:08:42.254 "name": "malloc0" 00:08:42.254 }, 00:08:42.254 "method": "bdev_malloc_create" 00:08:42.254 }, 00:08:42.254 { 00:08:42.254 "params": { 00:08:42.254 "filename": "/dev/zram1", 00:08:42.254 "name": "uring0" 00:08:42.254 }, 00:08:42.254 "method": "bdev_uring_create" 00:08:42.254 }, 00:08:42.254 { 00:08:42.254 "method": "bdev_wait_for_examine" 00:08:42.254 } 00:08:42.254 ] 00:08:42.254 } 00:08:42.254 ] 00:08:42.254 } 00:08:42.255 [2024-11-08 02:12:43.990235] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
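The round trip is then verified both ways: magic.dump0 went into uring0 above, it is read back out into magic.dump1 by the run starting here, and the two files must carry the same leading 1024-byte magic and be byte-for-byte identical. Condensed, with the same uring.json sketch:

# file -> uring bdev, then uring bdev -> file
./build/bin/spdk_dd --if=test/dd/magic.dump0 --ob=uring0 --json uring.json
./build/bin/spdk_dd --ib=uring0 --of=test/dd/magic.dump1 --json uring.json

# compare the leading magic, then the whole files
read -rn1024 verify_magic < test/dd/magic.dump1
[[ $verify_magic == "$magic" ]] || echo 'magic mismatch'
diff -q test/dd/magic.dump0 test/dd/magic.dump1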
00:08:42.255 [2024-11-08 02:12:43.991009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74116 ] 00:08:42.255 [2024-11-08 02:12:44.127681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.514 [2024-11-08 02:12:44.160280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.514 [2024-11-08 02:12:44.187101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.450  [2024-11-08T02:12:46.709Z] Copying: 200/512 [MB] (200 MBps) [2024-11-08T02:12:47.276Z] Copying: 380/512 [MB] (180 MBps) [2024-11-08T02:12:47.276Z] Copying: 512/512 [MB] (average 187 MBps) 00:08:45.392 00:08:45.392 02:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:45.392 02:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ izsbufxvd2kthioeqxt1ny662xfgz8sdrr60dm1ycjucvum4fb8zot97fq7nactshz885wq0q4x1l02y93scl5da28fbrr563gx12rki9hgg98vxoqrf35n224a7tyl8a7vsz9jfuczv3pz4g0j47yjskuribgm7mrhp1wpkmjvdowvrnft3b63u5b7be8rtxo4llnassptv9c3ciuu7jf5r78zn4pn194fpj9vyi371exybua6dhv69ndtbm8kj8o67m3me1jcihoskhk9sgmmjaj1u1ejjj2c4ug2rkuk8y803jkqej8s3s0eob4b7h7liyxttv4g7egltu39y5l6uizpj1nvta0ijdwny2zbi2k2ki3nr8779q6sobznvbyxadoqdyef81001ssteayqdrtnfmcnfeb3zh8bu7c9w9bgjwyf73ui8y4q4um8yq9150sbfyj8mh36iipzdl260jooc821mk28x3k6tnom2tql1dxm42w6j6zdijui1u2g8cpatrykat16m4nfzc43941dup7wn3h5vx4izic951dwk64yhczfuxcnqyukhmwnt664fmf5f2zj9asky8ut6xjcpbdjoefmu37zzs7nsuoae3b0daqaunmy4i06e80nz7etopy3j4njzg55lghl5f4mrlo20i8wava87as1ma1re5gjot51gc66t63mltz4t66eemnvzspugnx2sosksg9nvljucpanlfov2otyvmwm1dfff2d3tfxu5m48625rq81o4cm91hd8rsc51mh6v5u3suip3feax5zoii31t58frtag0ji8qi8xjiatbgj0t450cumcic6n8ue1nbqrrwblj8ubo91jblb5ghxabp232yvsj2pfycbnumct3c90wjtfpt4913ba6xqaeeq1gf58lxd2sp93rj0rwmg59lx95kexq72qyi5qrb3sezh0p9w565jsl4457818y6w6jj094lgot277rxcfdjyo9vh15elso63os0fz0nd9b == 
\i\z\s\b\u\f\x\v\d\2\k\t\h\i\o\e\q\x\t\1\n\y\6\6\2\x\f\g\z\8\s\d\r\r\6\0\d\m\1\y\c\j\u\c\v\u\m\4\f\b\8\z\o\t\9\7\f\q\7\n\a\c\t\s\h\z\8\8\5\w\q\0\q\4\x\1\l\0\2\y\9\3\s\c\l\5\d\a\2\8\f\b\r\r\5\6\3\g\x\1\2\r\k\i\9\h\g\g\9\8\v\x\o\q\r\f\3\5\n\2\2\4\a\7\t\y\l\8\a\7\v\s\z\9\j\f\u\c\z\v\3\p\z\4\g\0\j\4\7\y\j\s\k\u\r\i\b\g\m\7\m\r\h\p\1\w\p\k\m\j\v\d\o\w\v\r\n\f\t\3\b\6\3\u\5\b\7\b\e\8\r\t\x\o\4\l\l\n\a\s\s\p\t\v\9\c\3\c\i\u\u\7\j\f\5\r\7\8\z\n\4\p\n\1\9\4\f\p\j\9\v\y\i\3\7\1\e\x\y\b\u\a\6\d\h\v\6\9\n\d\t\b\m\8\k\j\8\o\6\7\m\3\m\e\1\j\c\i\h\o\s\k\h\k\9\s\g\m\m\j\a\j\1\u\1\e\j\j\j\2\c\4\u\g\2\r\k\u\k\8\y\8\0\3\j\k\q\e\j\8\s\3\s\0\e\o\b\4\b\7\h\7\l\i\y\x\t\t\v\4\g\7\e\g\l\t\u\3\9\y\5\l\6\u\i\z\p\j\1\n\v\t\a\0\i\j\d\w\n\y\2\z\b\i\2\k\2\k\i\3\n\r\8\7\7\9\q\6\s\o\b\z\n\v\b\y\x\a\d\o\q\d\y\e\f\8\1\0\0\1\s\s\t\e\a\y\q\d\r\t\n\f\m\c\n\f\e\b\3\z\h\8\b\u\7\c\9\w\9\b\g\j\w\y\f\7\3\u\i\8\y\4\q\4\u\m\8\y\q\9\1\5\0\s\b\f\y\j\8\m\h\3\6\i\i\p\z\d\l\2\6\0\j\o\o\c\8\2\1\m\k\2\8\x\3\k\6\t\n\o\m\2\t\q\l\1\d\x\m\4\2\w\6\j\6\z\d\i\j\u\i\1\u\2\g\8\c\p\a\t\r\y\k\a\t\1\6\m\4\n\f\z\c\4\3\9\4\1\d\u\p\7\w\n\3\h\5\v\x\4\i\z\i\c\9\5\1\d\w\k\6\4\y\h\c\z\f\u\x\c\n\q\y\u\k\h\m\w\n\t\6\6\4\f\m\f\5\f\2\z\j\9\a\s\k\y\8\u\t\6\x\j\c\p\b\d\j\o\e\f\m\u\3\7\z\z\s\7\n\s\u\o\a\e\3\b\0\d\a\q\a\u\n\m\y\4\i\0\6\e\8\0\n\z\7\e\t\o\p\y\3\j\4\n\j\z\g\5\5\l\g\h\l\5\f\4\m\r\l\o\2\0\i\8\w\a\v\a\8\7\a\s\1\m\a\1\r\e\5\g\j\o\t\5\1\g\c\6\6\t\6\3\m\l\t\z\4\t\6\6\e\e\m\n\v\z\s\p\u\g\n\x\2\s\o\s\k\s\g\9\n\v\l\j\u\c\p\a\n\l\f\o\v\2\o\t\y\v\m\w\m\1\d\f\f\f\2\d\3\t\f\x\u\5\m\4\8\6\2\5\r\q\8\1\o\4\c\m\9\1\h\d\8\r\s\c\5\1\m\h\6\v\5\u\3\s\u\i\p\3\f\e\a\x\5\z\o\i\i\3\1\t\5\8\f\r\t\a\g\0\j\i\8\q\i\8\x\j\i\a\t\b\g\j\0\t\4\5\0\c\u\m\c\i\c\6\n\8\u\e\1\n\b\q\r\r\w\b\l\j\8\u\b\o\9\1\j\b\l\b\5\g\h\x\a\b\p\2\3\2\y\v\s\j\2\p\f\y\c\b\n\u\m\c\t\3\c\9\0\w\j\t\f\p\t\4\9\1\3\b\a\6\x\q\a\e\e\q\1\g\f\5\8\l\x\d\2\s\p\9\3\r\j\0\r\w\m\g\5\9\l\x\9\5\k\e\x\q\7\2\q\y\i\5\q\r\b\3\s\e\z\h\0\p\9\w\5\6\5\j\s\l\4\4\5\7\8\1\8\y\6\w\6\j\j\0\9\4\l\g\o\t\2\7\7\r\x\c\f\d\j\y\o\9\v\h\1\5\e\l\s\o\6\3\o\s\0\f\z\0\n\d\9\b ]] 00:08:45.392 02:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:45.392 02:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ izsbufxvd2kthioeqxt1ny662xfgz8sdrr60dm1ycjucvum4fb8zot97fq7nactshz885wq0q4x1l02y93scl5da28fbrr563gx12rki9hgg98vxoqrf35n224a7tyl8a7vsz9jfuczv3pz4g0j47yjskuribgm7mrhp1wpkmjvdowvrnft3b63u5b7be8rtxo4llnassptv9c3ciuu7jf5r78zn4pn194fpj9vyi371exybua6dhv69ndtbm8kj8o67m3me1jcihoskhk9sgmmjaj1u1ejjj2c4ug2rkuk8y803jkqej8s3s0eob4b7h7liyxttv4g7egltu39y5l6uizpj1nvta0ijdwny2zbi2k2ki3nr8779q6sobznvbyxadoqdyef81001ssteayqdrtnfmcnfeb3zh8bu7c9w9bgjwyf73ui8y4q4um8yq9150sbfyj8mh36iipzdl260jooc821mk28x3k6tnom2tql1dxm42w6j6zdijui1u2g8cpatrykat16m4nfzc43941dup7wn3h5vx4izic951dwk64yhczfuxcnqyukhmwnt664fmf5f2zj9asky8ut6xjcpbdjoefmu37zzs7nsuoae3b0daqaunmy4i06e80nz7etopy3j4njzg55lghl5f4mrlo20i8wava87as1ma1re5gjot51gc66t63mltz4t66eemnvzspugnx2sosksg9nvljucpanlfov2otyvmwm1dfff2d3tfxu5m48625rq81o4cm91hd8rsc51mh6v5u3suip3feax5zoii31t58frtag0ji8qi8xjiatbgj0t450cumcic6n8ue1nbqrrwblj8ubo91jblb5ghxabp232yvsj2pfycbnumct3c90wjtfpt4913ba6xqaeeq1gf58lxd2sp93rj0rwmg59lx95kexq72qyi5qrb3sezh0p9w565jsl4457818y6w6jj094lgot277rxcfdjyo9vh15elso63os0fz0nd9b == 
\i\z\s\b\u\f\x\v\d\2\k\t\h\i\o\e\q\x\t\1\n\y\6\6\2\x\f\g\z\8\s\d\r\r\6\0\d\m\1\y\c\j\u\c\v\u\m\4\f\b\8\z\o\t\9\7\f\q\7\n\a\c\t\s\h\z\8\8\5\w\q\0\q\4\x\1\l\0\2\y\9\3\s\c\l\5\d\a\2\8\f\b\r\r\5\6\3\g\x\1\2\r\k\i\9\h\g\g\9\8\v\x\o\q\r\f\3\5\n\2\2\4\a\7\t\y\l\8\a\7\v\s\z\9\j\f\u\c\z\v\3\p\z\4\g\0\j\4\7\y\j\s\k\u\r\i\b\g\m\7\m\r\h\p\1\w\p\k\m\j\v\d\o\w\v\r\n\f\t\3\b\6\3\u\5\b\7\b\e\8\r\t\x\o\4\l\l\n\a\s\s\p\t\v\9\c\3\c\i\u\u\7\j\f\5\r\7\8\z\n\4\p\n\1\9\4\f\p\j\9\v\y\i\3\7\1\e\x\y\b\u\a\6\d\h\v\6\9\n\d\t\b\m\8\k\j\8\o\6\7\m\3\m\e\1\j\c\i\h\o\s\k\h\k\9\s\g\m\m\j\a\j\1\u\1\e\j\j\j\2\c\4\u\g\2\r\k\u\k\8\y\8\0\3\j\k\q\e\j\8\s\3\s\0\e\o\b\4\b\7\h\7\l\i\y\x\t\t\v\4\g\7\e\g\l\t\u\3\9\y\5\l\6\u\i\z\p\j\1\n\v\t\a\0\i\j\d\w\n\y\2\z\b\i\2\k\2\k\i\3\n\r\8\7\7\9\q\6\s\o\b\z\n\v\b\y\x\a\d\o\q\d\y\e\f\8\1\0\0\1\s\s\t\e\a\y\q\d\r\t\n\f\m\c\n\f\e\b\3\z\h\8\b\u\7\c\9\w\9\b\g\j\w\y\f\7\3\u\i\8\y\4\q\4\u\m\8\y\q\9\1\5\0\s\b\f\y\j\8\m\h\3\6\i\i\p\z\d\l\2\6\0\j\o\o\c\8\2\1\m\k\2\8\x\3\k\6\t\n\o\m\2\t\q\l\1\d\x\m\4\2\w\6\j\6\z\d\i\j\u\i\1\u\2\g\8\c\p\a\t\r\y\k\a\t\1\6\m\4\n\f\z\c\4\3\9\4\1\d\u\p\7\w\n\3\h\5\v\x\4\i\z\i\c\9\5\1\d\w\k\6\4\y\h\c\z\f\u\x\c\n\q\y\u\k\h\m\w\n\t\6\6\4\f\m\f\5\f\2\z\j\9\a\s\k\y\8\u\t\6\x\j\c\p\b\d\j\o\e\f\m\u\3\7\z\z\s\7\n\s\u\o\a\e\3\b\0\d\a\q\a\u\n\m\y\4\i\0\6\e\8\0\n\z\7\e\t\o\p\y\3\j\4\n\j\z\g\5\5\l\g\h\l\5\f\4\m\r\l\o\2\0\i\8\w\a\v\a\8\7\a\s\1\m\a\1\r\e\5\g\j\o\t\5\1\g\c\6\6\t\6\3\m\l\t\z\4\t\6\6\e\e\m\n\v\z\s\p\u\g\n\x\2\s\o\s\k\s\g\9\n\v\l\j\u\c\p\a\n\l\f\o\v\2\o\t\y\v\m\w\m\1\d\f\f\f\2\d\3\t\f\x\u\5\m\4\8\6\2\5\r\q\8\1\o\4\c\m\9\1\h\d\8\r\s\c\5\1\m\h\6\v\5\u\3\s\u\i\p\3\f\e\a\x\5\z\o\i\i\3\1\t\5\8\f\r\t\a\g\0\j\i\8\q\i\8\x\j\i\a\t\b\g\j\0\t\4\5\0\c\u\m\c\i\c\6\n\8\u\e\1\n\b\q\r\r\w\b\l\j\8\u\b\o\9\1\j\b\l\b\5\g\h\x\a\b\p\2\3\2\y\v\s\j\2\p\f\y\c\b\n\u\m\c\t\3\c\9\0\w\j\t\f\p\t\4\9\1\3\b\a\6\x\q\a\e\e\q\1\g\f\5\8\l\x\d\2\s\p\9\3\r\j\0\r\w\m\g\5\9\l\x\9\5\k\e\x\q\7\2\q\y\i\5\q\r\b\3\s\e\z\h\0\p\9\w\5\6\5\j\s\l\4\4\5\7\8\1\8\y\6\w\6\j\j\0\9\4\l\g\o\t\2\7\7\r\x\c\f\d\j\y\o\9\v\h\1\5\e\l\s\o\6\3\o\s\0\f\z\0\n\d\9\b ]] 00:08:45.392 02:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:45.960 02:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:45.960 02:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:45.960 02:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:45.960 02:12:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:45.960 [2024-11-08 02:12:47.655624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
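The magic verification above reads back two 1024-character magic strings (dd/uring.sh@65 and @68), compares each against the recorded value, and then diffs the two dump files. A minimal self-contained sketch of that check pattern follows; the file names and the source of the expected value are assumptions for illustration only, not the suite's actual plumbing:

    # Sketch of the verify-magic pattern: read back 1024 characters, compare
    # them with the value recorded earlier, then diff the full dumps.
    printf 'example-magic-data' > magic.dump0            # stand-in for the written magic
    cp magic.dump0 magic.dump1                           # stand-in for the data read back
    expected_magic=$(head -c 1024 magic.dump0)
    read -rn1024 verify_magic < magic.dump1
    [[ "$verify_magic" == "$expected_magic" ]] || { echo 'magic mismatch' >&2; exit 1; }
    diff -q magic.dump0 magic.dump1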
00:08:45.960 [2024-11-08 02:12:47.655857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74171 ] 00:08:45.960 { 00:08:45.960 "subsystems": [ 00:08:45.960 { 00:08:45.960 "subsystem": "bdev", 00:08:45.960 "config": [ 00:08:45.960 { 00:08:45.960 "params": { 00:08:45.960 "block_size": 512, 00:08:45.960 "num_blocks": 1048576, 00:08:45.960 "name": "malloc0" 00:08:45.960 }, 00:08:45.960 "method": "bdev_malloc_create" 00:08:45.960 }, 00:08:45.960 { 00:08:45.960 "params": { 00:08:45.960 "filename": "/dev/zram1", 00:08:45.960 "name": "uring0" 00:08:45.960 }, 00:08:45.960 "method": "bdev_uring_create" 00:08:45.960 }, 00:08:45.960 { 00:08:45.960 "method": "bdev_wait_for_examine" 00:08:45.960 } 00:08:45.960 ] 00:08:45.960 } 00:08:45.960 ] 00:08:45.960 } 00:08:45.960 [2024-11-08 02:12:47.789415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.960 [2024-11-08 02:12:47.824364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.218 [2024-11-08 02:12:47.853695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.152  [2024-11-08T02:12:49.972Z] Copying: 177/512 [MB] (177 MBps) [2024-11-08T02:12:51.351Z] Copying: 345/512 [MB] (168 MBps) [2024-11-08T02:12:51.351Z] Copying: 512/512 [MB] (average 173 MBps) 00:08:49.467 00:08:49.467 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:49.467 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:49.467 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:49.467 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:49.467 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:49.467 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:49.467 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:49.467 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:49.467 [2024-11-08 02:12:51.186952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:49.467 [2024-11-08 02:12:51.187262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74222 ] 00:08:49.467 { 00:08:49.467 "subsystems": [ 00:08:49.467 { 00:08:49.467 "subsystem": "bdev", 00:08:49.467 "config": [ 00:08:49.467 { 00:08:49.467 "params": { 00:08:49.467 "block_size": 512, 00:08:49.467 "num_blocks": 1048576, 00:08:49.467 "name": "malloc0" 00:08:49.467 }, 00:08:49.467 "method": "bdev_malloc_create" 00:08:49.467 }, 00:08:49.467 { 00:08:49.467 "params": { 00:08:49.467 "filename": "/dev/zram1", 00:08:49.467 "name": "uring0" 00:08:49.467 }, 00:08:49.467 "method": "bdev_uring_create" 00:08:49.467 }, 00:08:49.467 { 00:08:49.467 "params": { 00:08:49.467 "name": "uring0" 00:08:49.467 }, 00:08:49.467 "method": "bdev_uring_delete" 00:08:49.467 }, 00:08:49.467 { 00:08:49.467 "method": "bdev_wait_for_examine" 00:08:49.467 } 00:08:49.467 ] 00:08:49.467 } 00:08:49.467 ] 00:08:49.467 } 00:08:49.467 [2024-11-08 02:12:51.325050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.726 [2024-11-08 02:12:51.358028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.726 [2024-11-08 02:12:51.385535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.726  [2024-11-08T02:12:51.870Z] Copying: 0/0 [B] (average 0 Bps) 00:08:49.986 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.986 02:12:51 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:49.986 02:12:51 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:49.986 [2024-11-08 02:12:51.803419] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:49.986 [2024-11-08 02:12:51.803546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74245 ] 00:08:49.986 { 00:08:49.986 "subsystems": [ 00:08:49.986 { 00:08:49.986 "subsystem": "bdev", 00:08:49.986 "config": [ 00:08:49.986 { 00:08:49.986 "params": { 00:08:49.986 "block_size": 512, 00:08:49.986 "num_blocks": 1048576, 00:08:49.986 "name": "malloc0" 00:08:49.986 }, 00:08:49.986 "method": "bdev_malloc_create" 00:08:49.986 }, 00:08:49.986 { 00:08:49.986 "params": { 00:08:49.986 "filename": "/dev/zram1", 00:08:49.986 "name": "uring0" 00:08:49.986 }, 00:08:49.986 "method": "bdev_uring_create" 00:08:49.986 }, 00:08:49.986 { 00:08:49.986 "params": { 00:08:49.986 "name": "uring0" 00:08:49.986 }, 00:08:49.986 "method": "bdev_uring_delete" 00:08:49.986 }, 00:08:49.986 { 00:08:49.986 "method": "bdev_wait_for_examine" 00:08:49.986 } 00:08:49.986 ] 00:08:49.986 } 00:08:49.986 ] 00:08:49.986 } 00:08:50.245 [2024-11-08 02:12:51.942119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.245 [2024-11-08 02:12:51.973223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.245 [2024-11-08 02:12:51.999571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.245 [2024-11-08 02:12:52.121810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:50.245 [2024-11-08 02:12:52.121856] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:50.245 [2024-11-08 02:12:52.121866] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:50.245 [2024-11-08 02:12:52.121874] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.503 [2024-11-08 02:12:52.290442] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:50.503 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:08:50.503 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.503 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:08:50.503 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:08:50.503 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:08:50.503 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.503 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:50.503 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:50.503 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:50.503 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:50.762 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:50.762 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:50.762 ************************************ 00:08:50.762 END TEST dd_uring_copy 00:08:50.762 ************************************ 00:08:50.762 00:08:50.762 real 0m12.113s 00:08:50.762 user 0m8.317s 00:08:50.762 sys 0m10.397s 00:08:50.762 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.762 02:12:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:50.762 00:08:50.762 real 0m12.356s 00:08:50.762 user 0m8.449s 00:08:50.762 sys 0m10.505s 00:08:50.762 02:12:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.762 02:12:52 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:50.762 ************************************ 00:08:50.762 END TEST spdk_dd_uring 00:08:50.762 ************************************ 00:08:50.762 02:12:52 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:50.762 02:12:52 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:50.762 02:12:52 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.762 02:12:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:50.762 ************************************ 00:08:50.762 START TEST spdk_dd_sparse 00:08:50.762 ************************************ 00:08:50.762 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:50.763 * Looking for test storage... 00:08:50.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:50.763 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:50.763 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:08:50.763 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:51.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.022 --rc genhtml_branch_coverage=1 00:08:51.022 --rc genhtml_function_coverage=1 00:08:51.022 --rc genhtml_legend=1 00:08:51.022 --rc geninfo_all_blocks=1 00:08:51.022 --rc geninfo_unexecuted_blocks=1 00:08:51.022 00:08:51.022 ' 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:51.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.022 --rc genhtml_branch_coverage=1 00:08:51.022 --rc genhtml_function_coverage=1 00:08:51.022 --rc genhtml_legend=1 00:08:51.022 --rc geninfo_all_blocks=1 00:08:51.022 --rc geninfo_unexecuted_blocks=1 00:08:51.022 00:08:51.022 ' 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:51.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.022 --rc genhtml_branch_coverage=1 00:08:51.022 --rc genhtml_function_coverage=1 00:08:51.022 --rc genhtml_legend=1 00:08:51.022 --rc geninfo_all_blocks=1 00:08:51.022 --rc geninfo_unexecuted_blocks=1 00:08:51.022 00:08:51.022 ' 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:51.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.022 --rc genhtml_branch_coverage=1 00:08:51.022 --rc genhtml_function_coverage=1 00:08:51.022 --rc genhtml_legend=1 00:08:51.022 --rc geninfo_all_blocks=1 00:08:51.022 --rc geninfo_unexecuted_blocks=1 00:08:51.022 00:08:51.022 ' 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.022 02:12:52 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.023 02:12:52 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:51.023 1+0 records in 00:08:51.023 1+0 records out 00:08:51.023 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00679875 s, 617 MB/s 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:51.023 1+0 records in 00:08:51.023 1+0 records out 00:08:51.023 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0049732 s, 843 MB/s 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:51.023 1+0 records in 00:08:51.023 1+0 records out 00:08:51.023 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00378312 s, 1.1 GB/s 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:51.023 ************************************ 00:08:51.023 START TEST dd_sparse_file_to_file 00:08:51.023 ************************************ 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:51.023 02:12:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:51.023 [2024-11-08 02:12:52.783692] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
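The preparation above builds a sparse source file: truncate creates a 100 MB backing file for the AIO bdev, and three 4 MiB dd writes at seek offsets 0, 4 and 8 (in 4 MiB units) leave holes between the written extents; the stat checks later in the test compare apparent size (%s) against allocated blocks (%b) to confirm the holes survive the copy. A condensed sketch using the same commands, with the expected numbers taken from this run:

    # Condensed sketch of the sparse-file preparation and hole check.
    truncate dd_sparse_aio_disk --size 104857600         # 100 MB AIO backing file
    dd if=/dev/zero of=file_zero1 bs=4M count=1           # 4 MiB of data at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # 4 MiB of data at 16 MiB
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # 4 MiB of data at 32 MiB
    # In this run: apparent size 37748736 bytes (36 MiB), but only 24576
    # allocated 512-byte blocks (12 MiB), i.e. the holes were never written.
    stat --printf='apparent=%s allocated_blocks=%b\n' file_zero1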
00:08:51.023 [2024-11-08 02:12:52.783794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74343 ] 00:08:51.023 { 00:08:51.023 "subsystems": [ 00:08:51.023 { 00:08:51.023 "subsystem": "bdev", 00:08:51.023 "config": [ 00:08:51.023 { 00:08:51.023 "params": { 00:08:51.023 "block_size": 4096, 00:08:51.023 "filename": "dd_sparse_aio_disk", 00:08:51.023 "name": "dd_aio" 00:08:51.023 }, 00:08:51.023 "method": "bdev_aio_create" 00:08:51.023 }, 00:08:51.023 { 00:08:51.023 "params": { 00:08:51.023 "lvs_name": "dd_lvstore", 00:08:51.023 "bdev_name": "dd_aio" 00:08:51.023 }, 00:08:51.023 "method": "bdev_lvol_create_lvstore" 00:08:51.023 }, 00:08:51.023 { 00:08:51.023 "method": "bdev_wait_for_examine" 00:08:51.023 } 00:08:51.023 ] 00:08:51.023 } 00:08:51.023 ] 00:08:51.023 } 00:08:51.282 [2024-11-08 02:12:52.922929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.282 [2024-11-08 02:12:52.954116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.282 [2024-11-08 02:12:52.980287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.282  [2024-11-08T02:12:53.425Z] Copying: 12/36 [MB] (average 1000 MBps) 00:08:51.541 00:08:51.541 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:51.541 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:51.541 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:51.541 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:51.542 00:08:51.542 real 0m0.503s 00:08:51.542 user 0m0.297s 00:08:51.542 sys 0m0.251s 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:51.542 ************************************ 00:08:51.542 END TEST dd_sparse_file_to_file 00:08:51.542 ************************************ 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:51.542 ************************************ 00:08:51.542 START TEST dd_sparse_file_to_bdev 
00:08:51.542 ************************************ 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:51.542 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:51.542 [2024-11-08 02:12:53.339644] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:51.542 [2024-11-08 02:12:53.339755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74386 ] 00:08:51.542 { 00:08:51.542 "subsystems": [ 00:08:51.542 { 00:08:51.542 "subsystem": "bdev", 00:08:51.542 "config": [ 00:08:51.542 { 00:08:51.542 "params": { 00:08:51.542 "block_size": 4096, 00:08:51.542 "filename": "dd_sparse_aio_disk", 00:08:51.542 "name": "dd_aio" 00:08:51.542 }, 00:08:51.542 "method": "bdev_aio_create" 00:08:51.542 }, 00:08:51.542 { 00:08:51.542 "params": { 00:08:51.542 "lvs_name": "dd_lvstore", 00:08:51.542 "lvol_name": "dd_lvol", 00:08:51.542 "size_in_mib": 36, 00:08:51.542 "thin_provision": true 00:08:51.542 }, 00:08:51.542 "method": "bdev_lvol_create" 00:08:51.542 }, 00:08:51.542 { 00:08:51.542 "method": "bdev_wait_for_examine" 00:08:51.542 } 00:08:51.542 ] 00:08:51.542 } 00:08:51.542 ] 00:08:51.542 } 00:08:51.801 [2024-11-08 02:12:53.479696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.801 [2024-11-08 02:12:53.517811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.801 [2024-11-08 02:12:53.549866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.801  [2024-11-08T02:12:53.945Z] Copying: 12/36 [MB] (average 545 MBps) 00:08:52.061 00:08:52.061 00:08:52.061 real 0m0.533s 00:08:52.061 user 0m0.348s 00:08:52.061 sys 0m0.248s 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:52.061 ************************************ 00:08:52.061 END TEST dd_sparse_file_to_bdev 00:08:52.061 ************************************ 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:52.061 ************************************ 00:08:52.061 START TEST dd_sparse_bdev_to_file 00:08:52.061 ************************************ 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:52.061 02:12:53 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:52.061 [2024-11-08 02:12:53.928518] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:52.061 [2024-11-08 02:12:53.928615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74418 ] 00:08:52.061 { 00:08:52.061 "subsystems": [ 00:08:52.061 { 00:08:52.061 "subsystem": "bdev", 00:08:52.061 "config": [ 00:08:52.061 { 00:08:52.061 "params": { 00:08:52.061 "block_size": 4096, 00:08:52.061 "filename": "dd_sparse_aio_disk", 00:08:52.061 "name": "dd_aio" 00:08:52.061 }, 00:08:52.061 "method": "bdev_aio_create" 00:08:52.061 }, 00:08:52.061 { 00:08:52.061 "method": "bdev_wait_for_examine" 00:08:52.061 } 00:08:52.061 ] 00:08:52.061 } 00:08:52.061 ] 00:08:52.061 } 00:08:52.320 [2024-11-08 02:12:54.068455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.320 [2024-11-08 02:12:54.100729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.320 [2024-11-08 02:12:54.131516] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.320  [2024-11-08T02:12:54.464Z] Copying: 12/36 [MB] (average 1200 MBps) 00:08:52.580 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:52.580 00:08:52.580 real 0m0.508s 00:08:52.580 user 0m0.320s 00:08:52.580 sys 0m0.233s 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:52.580 ************************************ 00:08:52.580 END TEST dd_sparse_bdev_to_file 00:08:52.580 ************************************ 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:52.580 00:08:52.580 real 0m1.944s 00:08:52.580 user 0m1.148s 00:08:52.580 sys 0m0.928s 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.580 02:12:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:52.580 ************************************ 00:08:52.580 END TEST spdk_dd_sparse 00:08:52.580 ************************************ 00:08:52.840 02:12:54 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:52.840 02:12:54 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.840 02:12:54 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.840 02:12:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:52.840 ************************************ 00:08:52.840 START TEST spdk_dd_negative 00:08:52.840 ************************************ 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:52.840 * Looking for test storage... 
00:08:52.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:52.840 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:52.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.841 --rc genhtml_branch_coverage=1 00:08:52.841 --rc genhtml_function_coverage=1 00:08:52.841 --rc genhtml_legend=1 00:08:52.841 --rc geninfo_all_blocks=1 00:08:52.841 --rc geninfo_unexecuted_blocks=1 00:08:52.841 00:08:52.841 ' 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:52.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.841 --rc genhtml_branch_coverage=1 00:08:52.841 --rc genhtml_function_coverage=1 00:08:52.841 --rc genhtml_legend=1 00:08:52.841 --rc geninfo_all_blocks=1 00:08:52.841 --rc geninfo_unexecuted_blocks=1 00:08:52.841 00:08:52.841 ' 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:52.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.841 --rc genhtml_branch_coverage=1 00:08:52.841 --rc genhtml_function_coverage=1 00:08:52.841 --rc genhtml_legend=1 00:08:52.841 --rc geninfo_all_blocks=1 00:08:52.841 --rc geninfo_unexecuted_blocks=1 00:08:52.841 00:08:52.841 ' 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:52.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.841 --rc genhtml_branch_coverage=1 00:08:52.841 --rc genhtml_function_coverage=1 00:08:52.841 --rc genhtml_legend=1 00:08:52.841 --rc geninfo_all_blocks=1 00:08:52.841 --rc geninfo_unexecuted_blocks=1 00:08:52.841 00:08:52.841 ' 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:52.841 ************************************ 00:08:52.841 START TEST 
dd_invalid_arguments 00:08:52.841 ************************************ 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:52.841 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:53.101 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:53.101 00:08:53.101 CPU options: 00:08:53.101 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:53.101 (like [0,1,10]) 00:08:53.101 --lcores lcore to CPU mapping list. The list is in the format: 00:08:53.101 [<,lcores[@CPUs]>...] 00:08:53.101 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:53.101 Within the group, '-' is used for range separator, 00:08:53.101 ',' is used for single number separator. 00:08:53.101 '( )' can be omitted for single element group, 00:08:53.101 '@' can be omitted if cpus and lcores have the same value 00:08:53.101 --disable-cpumask-locks Disable CPU core lock files. 00:08:53.101 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:53.101 pollers in the app support interrupt mode) 00:08:53.101 -p, --main-core main (primary) core for DPDK 00:08:53.101 00:08:53.101 Configuration options: 00:08:53.101 -c, --config, --json JSON config file 00:08:53.101 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:53.101 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:53.101 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:53.101 --rpcs-allowed comma-separated list of permitted RPCS 00:08:53.101 --json-ignore-init-errors don't exit on invalid config entry 00:08:53.101 00:08:53.101 Memory options: 00:08:53.101 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:53.101 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:53.101 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:53.101 -R, --huge-unlink unlink huge files after initialization 00:08:53.101 -n, --mem-channels number of memory channels used for DPDK 00:08:53.101 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:53.101 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:53.101 --no-huge run without using hugepages 00:08:53.101 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:53.101 -i, --shm-id shared memory ID (optional) 00:08:53.101 -g, --single-file-segments force creating just one hugetlbfs file 00:08:53.101 00:08:53.101 PCI options: 00:08:53.101 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:53.101 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:53.101 -u, --no-pci disable PCI access 00:08:53.101 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:53.101 00:08:53.101 Log options: 00:08:53.101 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:53.101 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:53.101 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:53.101 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:53.101 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:08:53.101 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:08:53.101 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:08:53.101 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:08:53.101 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:53.101 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:08:53.101 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:08:53.101 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:08:53.101 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:53.101 --silence-noticelog disable notice level logging to stderr 00:08:53.101 00:08:53.101 Trace options: 00:08:53.101 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:53.101 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:53.101 [2024-11-08 02:12:54.755086] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:53.101 setting 0 to disable trace (default 32768) 00:08:53.101 Tracepoints vary in size and can use more than one trace entry. 00:08:53.101 -e, --tpoint-group [:] 00:08:53.101 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:08:53.101 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:08:53.101 blob, bdev_raid, all). 00:08:53.101 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:53.102 a tracepoint group. First tpoint inside a group can be enabled by 00:08:53.102 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:53.102 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:08:53.102 in /include/spdk_internal/trace_defs.h 00:08:53.102 00:08:53.102 Other options: 00:08:53.102 -h, --help show this usage 00:08:53.102 -v, --version print SPDK version 00:08:53.102 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:53.102 --env-context Opaque context for use of the env implementation 00:08:53.102 00:08:53.102 Application specific: 00:08:53.102 [--------- DD Options ---------] 00:08:53.102 --if Input file. Must specify either --if or --ib. 00:08:53.102 --ib Input bdev. Must specifier either --if or --ib 00:08:53.102 --of Output file. Must specify either --of or --ob. 00:08:53.102 --ob Output bdev. Must specify either --of or --ob. 00:08:53.102 --iflag Input file flags. 00:08:53.102 --oflag Output file flags. 00:08:53.102 --bs I/O unit size (default: 4096) 00:08:53.102 --qd Queue depth (default: 2) 00:08:53.102 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:53.102 --skip Skip this many I/O units at start of input. (default: 0) 00:08:53.102 --seek Skip this many I/O units at start of output. (default: 0) 00:08:53.102 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:53.102 --sparse Enable hole skipping in input target 00:08:53.102 Available iflag and oflag values: 00:08:53.102 append - append mode 00:08:53.102 direct - use direct I/O for data 00:08:53.102 directory - fail unless a directory 00:08:53.102 dsync - use synchronized I/O for data 00:08:53.102 noatime - do not update access time 00:08:53.102 noctty - do not assign controlling terminal from file 00:08:53.102 nofollow - do not follow symlinks 00:08:53.102 nonblock - use non-blocking I/O 00:08:53.102 sync - use synchronized I/O for data and metadata 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.102 00:08:53.102 real 0m0.078s 00:08:53.102 user 0m0.040s 00:08:53.102 sys 0m0.037s 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.102 ************************************ 00:08:53.102 END TEST dd_invalid_arguments 00:08:53.102 ************************************ 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:53.102 ************************************ 00:08:53.102 START TEST dd_double_input 00:08:53.102 ************************************ 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:53.102 [2024-11-08 02:12:54.878653] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
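For reference, a minimal sketch of how the DD options listed in the usage text above combine, versus the double-input case this test provokes; the file paths and bdev names are illustrative, not taken from this run:

  # Valid: exactly one input (--if or --ib) and one output (--of or --ob)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/in.bin --of=/tmp/out.bin --bs=4096 --count=16

  # Invalid: a file input combined with a bdev input, rejected with the
  # "You may specify either --if or --ib, but not both." error seen above
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/in.bin --ib=malloc0 --ob=malloc1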
00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.102 00:08:53.102 real 0m0.080s 00:08:53.102 user 0m0.044s 00:08:53.102 sys 0m0.034s 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.102 ************************************ 00:08:53.102 END TEST dd_double_input 00:08:53.102 ************************************ 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:53.102 ************************************ 00:08:53.102 START TEST dd_double_output 00:08:53.102 ************************************ 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.102 02:12:54 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:53.362 [2024-11-08 02:12:55.007375] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.362 00:08:53.362 real 0m0.077s 00:08:53.362 user 0m0.052s 00:08:53.362 sys 0m0.024s 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:53.362 ************************************ 00:08:53.362 END TEST dd_double_output 00:08:53.362 ************************************ 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:53.362 ************************************ 00:08:53.362 START TEST dd_no_input 00:08:53.362 ************************************ 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:53.362 [2024-11-08 02:12:55.137021] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.362 00:08:53.362 real 0m0.078s 00:08:53.362 user 0m0.051s 00:08:53.362 sys 0m0.026s 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:53.362 ************************************ 00:08:53.362 END TEST dd_no_input 00:08:53.362 ************************************ 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:53.362 ************************************ 00:08:53.362 START TEST dd_no_output 00:08:53.362 ************************************ 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.362 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:53.622 [2024-11-08 02:12:55.266985] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:53.622 02:12:55 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.622 00:08:53.622 real 0m0.079s 00:08:53.622 user 0m0.050s 00:08:53.622 sys 0m0.027s 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:53.622 ************************************ 00:08:53.622 END TEST dd_no_output 00:08:53.622 ************************************ 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:53.622 ************************************ 00:08:53.622 START TEST dd_wrong_blocksize 00:08:53.622 ************************************ 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:53.622 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:53.623 [2024-11-08 02:12:55.400011] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.623 00:08:53.623 real 0m0.080s 00:08:53.623 user 0m0.053s 00:08:53.623 sys 0m0.026s 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:53.623 ************************************ 00:08:53.623 END TEST dd_wrong_blocksize 00:08:53.623 ************************************ 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:53.623 ************************************ 00:08:53.623 START TEST dd_smaller_blocksize 00:08:53.623 ************************************ 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.623 
02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.623 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:53.882 [2024-11-08 02:12:55.534407] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:53.882 [2024-11-08 02:12:55.534512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74648 ] 00:08:53.882 [2024-11-08 02:12:55.675230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.882 [2024-11-08 02:12:55.715772] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.882 [2024-11-08 02:12:55.747469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.141 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:54.141 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:54.141 [2024-11-08 02:12:55.764990] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:54.141 [2024-11-08 02:12:55.765031] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.142 [2024-11-08 02:12:55.829394] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:54.142 00:08:54.142 real 0m0.432s 00:08:54.142 user 0m0.218s 00:08:54.142 sys 0m0.109s 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:54.142 ************************************ 00:08:54.142 END TEST dd_smaller_blocksize 00:08:54.142 ************************************ 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:54.142 ************************************ 00:08:54.142 START TEST dd_invalid_count 00:08:54.142 ************************************ 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
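For reference, the NOT/es bookkeeping repeated in these traces amounts to running a command that is expected to fail and treating the test as passing only when it exits non-zero; a simplified stand-in, not the actual autotest_common.sh implementation:

  NOT() {
    # Succeed only when the wrapped command fails (simplified stand-in).
    if "$@"; then
      return 1
    fi
    return 0
  }

  NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=   # passes: spdk_dd refuses to run without --if or --ib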
00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:54.142 02:12:55 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:54.142 [2024-11-08 02:12:56.016479] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:54.402 00:08:54.402 real 0m0.077s 00:08:54.402 user 0m0.049s 00:08:54.402 sys 0m0.027s 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:54.402 ************************************ 00:08:54.402 END TEST dd_invalid_count 00:08:54.402 ************************************ 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:54.402 ************************************ 
00:08:54.402 START TEST dd_invalid_oflag 00:08:54.402 ************************************ 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:54.402 [2024-11-08 02:12:56.144655] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:54.402 00:08:54.402 real 0m0.080s 00:08:54.402 user 0m0.045s 00:08:54.402 sys 0m0.033s 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:54.402 ************************************ 00:08:54.402 END TEST dd_invalid_oflag 00:08:54.402 ************************************ 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:54.402 ************************************ 00:08:54.402 START TEST dd_invalid_iflag 00:08:54.402 
************************************ 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:54.402 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:54.402 [2024-11-08 02:12:56.277565] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:54.662 00:08:54.662 real 0m0.082s 00:08:54.662 user 0m0.053s 00:08:54.662 sys 0m0.027s 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.662 ************************************ 00:08:54.662 END TEST dd_invalid_iflag 00:08:54.662 ************************************ 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:54.662 ************************************ 00:08:54.662 START TEST dd_unknown_flag 00:08:54.662 ************************************ 00:08:54.662 
02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:54.662 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:54.662 [2024-11-08 02:12:56.407741] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
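For reference, the unknown_flag case above passes --oflag=-1, which is not one of the flag names in the earlier usage text; a well-formed variant (paths illustrative) would pick a documented flag instead:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/in.bin --of=/tmp/out.bin --oflag=direct --bs=4096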
00:08:54.662 [2024-11-08 02:12:56.407839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74740 ] 00:08:54.921 [2024-11-08 02:12:56.550320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.921 [2024-11-08 02:12:56.593262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.921 [2024-11-08 02:12:56.626462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.921 [2024-11-08 02:12:56.644518] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:54.921 [2024-11-08 02:12:56.644598] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.921 [2024-11-08 02:12:56.644670] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:54.921 [2024-11-08 02:12:56.644686] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.921 [2024-11-08 02:12:56.644953] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:54.921 [2024-11-08 02:12:56.644972] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.921 [2024-11-08 02:12:56.645021] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:54.921 [2024-11-08 02:12:56.645033] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:54.921 [2024-11-08 02:12:56.712196] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.181 00:08:55.181 real 0m0.461s 00:08:55.181 user 0m0.251s 00:08:55.181 sys 0m0.118s 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.181 ************************************ 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:55.181 END TEST dd_unknown_flag 00:08:55.181 ************************************ 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:55.181 ************************************ 00:08:55.181 START TEST dd_invalid_json 00:08:55.181 ************************************ 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:55.181 02:12:56 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:55.181 [2024-11-08 02:12:56.927189] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
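For reference, the invalid_json case above feeds spdk_dd an empty --json document via /dev/fd/62, which is what triggers the "JSON data cannot be empty" failure recorded below; with an ordinary file (paths illustrative) the same failure can be reproduced as:

  : > /tmp/empty.json   # zero-length config file
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/tmp/in.bin --of=/tmp/out.bin --json /tmp/empty.json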
00:08:55.181 [2024-11-08 02:12:56.927302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74767 ] 00:08:55.440 [2024-11-08 02:12:57.068502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.440 [2024-11-08 02:12:57.111706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.441 [2024-11-08 02:12:57.111812] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:55.441 [2024-11-08 02:12:57.111828] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:55.441 [2024-11-08 02:12:57.111840] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.441 [2024-11-08 02:12:57.111883] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.441 00:08:55.441 real 0m0.323s 00:08:55.441 user 0m0.157s 00:08:55.441 sys 0m0.065s 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.441 ************************************ 00:08:55.441 END TEST dd_invalid_json 00:08:55.441 ************************************ 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:55.441 ************************************ 00:08:55.441 START TEST dd_invalid_seek 00:08:55.441 ************************************ 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:55.441 
02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:55.441 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:55.441 { 00:08:55.441 "subsystems": [ 00:08:55.441 { 00:08:55.441 "subsystem": "bdev", 00:08:55.441 "config": [ 00:08:55.441 { 00:08:55.441 "params": { 00:08:55.441 "block_size": 512, 00:08:55.441 "num_blocks": 512, 00:08:55.441 "name": "malloc0" 00:08:55.441 }, 00:08:55.441 "method": "bdev_malloc_create" 00:08:55.441 }, 00:08:55.441 { 00:08:55.441 "params": { 00:08:55.441 "block_size": 512, 00:08:55.441 "num_blocks": 512, 00:08:55.441 "name": "malloc1" 00:08:55.441 }, 00:08:55.441 "method": "bdev_malloc_create" 00:08:55.441 }, 00:08:55.441 { 00:08:55.441 "method": "bdev_wait_for_examine" 00:08:55.441 } 00:08:55.441 ] 00:08:55.441 } 00:08:55.441 ] 00:08:55.441 } 00:08:55.441 [2024-11-08 02:12:57.307058] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
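For reference, the invalid_seek helper runs spdk_dd entirely against the two 512-block malloc bdevs defined in the JSON config printed above; a hand-run equivalent (config path illustrative) would be:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json /tmp/malloc_bdevs.json
  # rejected: malloc1 has only 512 blocks, so a 513-block seek leaves no room in the output

where /tmp/malloc_bdevs.json would hold the two bdev_malloc_create entries and the bdev_wait_for_examine call shown above.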
00:08:55.441 [2024-11-08 02:12:57.307190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74798 ] 00:08:55.700 [2024-11-08 02:12:57.449847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.700 [2024-11-08 02:12:57.489722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.700 [2024-11-08 02:12:57.522119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.700 [2024-11-08 02:12:57.565983] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:55.700 [2024-11-08 02:12:57.566052] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.959 [2024-11-08 02:12:57.632427] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.959 00:08:55.959 real 0m0.475s 00:08:55.959 user 0m0.312s 00:08:55.959 sys 0m0.123s 00:08:55.959 ************************************ 00:08:55.959 END TEST dd_invalid_seek 00:08:55.959 ************************************ 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:55.959 ************************************ 00:08:55.959 START TEST dd_invalid_skip 00:08:55.959 ************************************ 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:55.959 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:55.960 02:12:57 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:55.960 { 00:08:55.960 "subsystems": [ 00:08:55.960 { 00:08:55.960 "subsystem": "bdev", 00:08:55.960 "config": [ 00:08:55.960 { 00:08:55.960 "params": { 00:08:55.960 "block_size": 512, 00:08:55.960 "num_blocks": 512, 00:08:55.960 "name": "malloc0" 00:08:55.960 }, 00:08:55.960 "method": "bdev_malloc_create" 00:08:55.960 }, 00:08:55.960 { 00:08:55.960 "params": { 00:08:55.960 "block_size": 512, 00:08:55.960 "num_blocks": 512, 00:08:55.960 "name": "malloc1" 00:08:55.960 }, 00:08:55.960 "method": "bdev_malloc_create" 00:08:55.960 }, 00:08:55.960 { 00:08:55.960 "method": "bdev_wait_for_examine" 00:08:55.960 } 00:08:55.960 ] 00:08:55.960 } 00:08:55.960 ] 00:08:55.960 } 00:08:55.960 [2024-11-08 02:12:57.832504] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:55.960 [2024-11-08 02:12:57.832642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74826 ] 00:08:56.219 [2024-11-08 02:12:57.969808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.219 [2024-11-08 02:12:58.003293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.219 [2024-11-08 02:12:58.032837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.219 [2024-11-08 02:12:58.073268] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:56.219 [2024-11-08 02:12:58.073360] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:56.478 [2024-11-08 02:12:58.130569] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:56.478 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:08:56.478 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:56.478 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:08:56.478 ************************************ 00:08:56.478 END TEST dd_invalid_skip 00:08:56.478 ************************************ 00:08:56.478 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:08:56.478 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:08:56.478 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:56.478 00:08:56.478 real 0m0.429s 00:08:56.478 user 0m0.262s 00:08:56.478 sys 0m0.124s 00:08:56.478 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:56.479 ************************************ 00:08:56.479 START TEST dd_invalid_input_count 00:08:56.479 ************************************ 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:56.479 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:56.479 [2024-11-08 02:12:58.311182] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
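The valid_exec_arg/NOT plumbing traced around each of these negative runs (the type -t, type -P and [[ -x ]] checks, then the es bookkeeping) is the harness confirming that the target is actually runnable before executing it and expecting it to fail. A rough stand-alone sketch of that logic, reconstructed from the trace rather than copied from autotest_common.sh:

  valid_exec_arg() {
      local arg=$1
      case "$(type -t "$arg")" in
          function | builtin | alias) return 0 ;;          # runnable as-is
          file) arg=$(type -P "$arg") && [[ -x $arg ]] ;;  # resolve via PATH, require the exec bit
          *) return 1 ;;
      esac
  }

  NOT() {
      local es=0
      valid_exec_arg "$1" || return 1   # refuse to "negate" something that cannot run at all
      "$@" || es=$?
      (( es != 0 ))                     # succeed only when the wrapped command failed
  }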
00:08:56.479 [2024-11-08 02:12:58.311284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74865 ] 00:08:56.479 { 00:08:56.479 "subsystems": [ 00:08:56.479 { 00:08:56.479 "subsystem": "bdev", 00:08:56.479 "config": [ 00:08:56.479 { 00:08:56.479 "params": { 00:08:56.479 "block_size": 512, 00:08:56.479 "num_blocks": 512, 00:08:56.479 "name": "malloc0" 00:08:56.479 }, 00:08:56.479 "method": "bdev_malloc_create" 00:08:56.479 }, 00:08:56.479 { 00:08:56.479 "params": { 00:08:56.479 "block_size": 512, 00:08:56.479 "num_blocks": 512, 00:08:56.479 "name": "malloc1" 00:08:56.479 }, 00:08:56.479 "method": "bdev_malloc_create" 00:08:56.479 }, 00:08:56.479 { 00:08:56.479 "method": "bdev_wait_for_examine" 00:08:56.479 } 00:08:56.479 ] 00:08:56.479 } 00:08:56.479 ] 00:08:56.479 } 00:08:56.738 [2024-11-08 02:12:58.448411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.738 [2024-11-08 02:12:58.480676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.738 [2024-11-08 02:12:58.508160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.738 [2024-11-08 02:12:58.548348] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:56.738 [2024-11-08 02:12:58.548436] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:56.738 [2024-11-08 02:12:58.605575] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:56.998 00:08:56.998 real 0m0.421s 00:08:56.998 user 0m0.272s 00:08:56.998 sys 0m0.109s 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.998 ************************************ 00:08:56.998 END TEST dd_invalid_input_count 00:08:56.998 ************************************ 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:56.998 ************************************ 00:08:56.998 START TEST dd_invalid_output_count 00:08:56.998 ************************************ 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # 
invalid_output_count 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:56.998 02:12:58 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:56.998 { 00:08:56.998 "subsystems": [ 00:08:56.998 { 00:08:56.998 "subsystem": "bdev", 00:08:56.998 "config": [ 00:08:56.998 { 00:08:56.998 "params": { 00:08:56.998 "block_size": 512, 00:08:56.998 "num_blocks": 512, 00:08:56.998 "name": "malloc0" 00:08:56.998 }, 00:08:56.998 "method": "bdev_malloc_create" 00:08:56.998 }, 00:08:56.998 { 00:08:56.998 "method": "bdev_wait_for_examine" 00:08:56.998 } 00:08:56.998 ] 00:08:56.998 } 00:08:56.998 ] 00:08:56.998 } 00:08:56.998 [2024-11-08 02:12:58.785548] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 
initialization... 00:08:56.998 [2024-11-08 02:12:58.785646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74893 ] 00:08:57.258 [2024-11-08 02:12:58.920946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.258 [2024-11-08 02:12:58.952180] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.258 [2024-11-08 02:12:58.981207] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.258 [2024-11-08 02:12:59.014674] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:57.258 [2024-11-08 02:12:59.014755] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:57.258 [2024-11-08 02:12:59.074583] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:57.258 02:12:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:08:57.258 02:12:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:57.258 02:12:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:08:57.258 02:12:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:57.258 02:12:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:08:57.258 02:12:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:57.258 00:08:57.258 real 0m0.415s 00:08:57.258 user 0m0.252s 00:08:57.258 sys 0m0.113s 00:08:57.258 02:12:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.258 ************************************ 00:08:57.258 END TEST dd_invalid_output_count 00:08:57.258 ************************************ 00:08:57.258 02:12:59 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:57.517 ************************************ 00:08:57.517 START TEST dd_bs_not_multiple 00:08:57.517 ************************************ 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:57.517 02:12:59 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.517 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.518 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.518 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:57.518 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:57.518 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:57.518 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:57.518 [2024-11-08 02:12:59.252561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
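For the dd_bs_not_multiple case above, the only thing wrong with the invocation is that 513 is not a multiple of the malloc bdevs' 512-byte native block size, so the "--bs value must be a multiple" error below is expected. A one-line sketch of a passing variant, reusing the illustrative $SPDK_DD path and $conf JSON from the dd_invalid_skip sketch earlier:

  "$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=1024 --json <(printf '%s' "$conf")  # 1024 = 2 * 512, accepted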
00:08:57.518 [2024-11-08 02:12:59.252665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74924 ] 00:08:57.518 { 00:08:57.518 "subsystems": [ 00:08:57.518 { 00:08:57.518 "subsystem": "bdev", 00:08:57.518 "config": [ 00:08:57.518 { 00:08:57.518 "params": { 00:08:57.518 "block_size": 512, 00:08:57.518 "num_blocks": 512, 00:08:57.518 "name": "malloc0" 00:08:57.518 }, 00:08:57.518 "method": "bdev_malloc_create" 00:08:57.518 }, 00:08:57.518 { 00:08:57.518 "params": { 00:08:57.518 "block_size": 512, 00:08:57.518 "num_blocks": 512, 00:08:57.518 "name": "malloc1" 00:08:57.518 }, 00:08:57.518 "method": "bdev_malloc_create" 00:08:57.518 }, 00:08:57.518 { 00:08:57.518 "method": "bdev_wait_for_examine" 00:08:57.518 } 00:08:57.518 ] 00:08:57.518 } 00:08:57.518 ] 00:08:57.518 } 00:08:57.518 [2024-11-08 02:12:59.391921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.777 [2024-11-08 02:12:59.428671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.777 [2024-11-08 02:12:59.456771] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.777 [2024-11-08 02:12:59.498982] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:57.777 [2024-11-08 02:12:59.499073] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:57.777 [2024-11-08 02:12:59.562874] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:57.777 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:08:57.777 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:57.777 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:08:57.777 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:08:57.777 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:08:57.777 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:57.777 00:08:57.777 real 0m0.449s 00:08:57.777 user 0m0.294s 00:08:57.777 sys 0m0.116s 00:08:57.777 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.777 ************************************ 00:08:57.777 END TEST dd_bs_not_multiple 00:08:57.777 ************************************ 00:08:57.777 02:12:59 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:58.036 00:08:58.036 real 0m5.189s 00:08:58.036 user 0m2.884s 00:08:58.036 sys 0m1.715s 00:08:58.036 02:12:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.036 ************************************ 00:08:58.036 END TEST spdk_dd_negative 00:08:58.036 ************************************ 00:08:58.036 02:12:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:58.036 00:08:58.036 real 1m3.547s 00:08:58.036 user 0m40.369s 00:08:58.036 sys 0m26.482s 00:08:58.036 02:12:59 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.036 02:12:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:58.036 
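Each failing spdk_dd run above exits non-zero, and the trace then shows the harness folding that status down before asserting the failure: es=228 becomes 100 and finally 1 for the --skip and --count cases, es=234 becomes 106 and then 1 for the --bs case. A condensed sketch of that bookkeeping as it appears in the trace, not the verbatim autotest_common.sh helper (which distinguishes more status classes than this):

  es=228                                 # raw exit status captured from the failing spdk_dd run
  (( es > 128 )) && es=$(( es - 128 ))   # statuses above 128 are folded back down: 228 -> 100, 234 -> 106
  case "$es" in
      *) es=1 ;;                         # every failure class observed here collapses to a plain es=1
  esac
  (( !es == 0 ))                         # true exactly when es is still non-zero, so the NOT test passes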
************************************ 00:08:58.036 END TEST spdk_dd 00:08:58.036 ************************************ 00:08:58.036 02:12:59 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:58.036 02:12:59 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:58.036 02:12:59 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:58.036 02:12:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:58.036 02:12:59 -- common/autotest_common.sh@10 -- # set +x 00:08:58.036 02:12:59 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:58.036 02:12:59 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:58.036 02:12:59 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:58.036 02:12:59 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:58.036 02:12:59 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:58.036 02:12:59 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:58.036 02:12:59 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:58.036 02:12:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:58.036 02:12:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.036 02:12:59 -- common/autotest_common.sh@10 -- # set +x 00:08:58.036 ************************************ 00:08:58.036 START TEST nvmf_tcp 00:08:58.036 ************************************ 00:08:58.036 02:12:59 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:58.036 * Looking for test storage... 00:08:58.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:58.036 02:12:59 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:58.036 02:12:59 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:58.036 02:12:59 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:58.295 02:12:59 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.295 02:12:59 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:58.296 02:12:59 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:58.296 02:12:59 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.296 02:12:59 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:58.296 02:12:59 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.296 02:12:59 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.296 02:12:59 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.296 02:12:59 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:58.296 02:12:59 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.296 02:12:59 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:58.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.296 --rc genhtml_branch_coverage=1 00:08:58.296 --rc genhtml_function_coverage=1 00:08:58.296 --rc genhtml_legend=1 00:08:58.296 --rc geninfo_all_blocks=1 00:08:58.296 --rc geninfo_unexecuted_blocks=1 00:08:58.296 00:08:58.296 ' 00:08:58.296 02:12:59 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:58.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.296 --rc genhtml_branch_coverage=1 00:08:58.296 --rc genhtml_function_coverage=1 00:08:58.296 --rc genhtml_legend=1 00:08:58.296 --rc geninfo_all_blocks=1 00:08:58.296 --rc geninfo_unexecuted_blocks=1 00:08:58.296 00:08:58.296 ' 00:08:58.296 02:12:59 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:58.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.296 --rc genhtml_branch_coverage=1 00:08:58.296 --rc genhtml_function_coverage=1 00:08:58.296 --rc genhtml_legend=1 00:08:58.296 --rc geninfo_all_blocks=1 00:08:58.296 --rc geninfo_unexecuted_blocks=1 00:08:58.296 00:08:58.296 ' 00:08:58.296 02:12:59 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:58.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.296 --rc genhtml_branch_coverage=1 00:08:58.296 --rc genhtml_function_coverage=1 00:08:58.296 --rc genhtml_legend=1 00:08:58.296 --rc geninfo_all_blocks=1 00:08:58.296 --rc geninfo_unexecuted_blocks=1 00:08:58.296 00:08:58.296 ' 00:08:58.296 02:13:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:58.296 02:13:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:58.296 02:13:00 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:58.296 02:13:00 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:58.296 02:13:00 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.296 02:13:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.296 ************************************ 00:08:58.296 START TEST nvmf_target_core 00:08:58.296 ************************************ 00:08:58.296 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:58.296 * Looking for test storage... 00:08:58.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:58.296 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:58.296 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:08:58.296 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:58.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.558 --rc genhtml_branch_coverage=1 00:08:58.558 --rc genhtml_function_coverage=1 00:08:58.558 --rc genhtml_legend=1 00:08:58.558 --rc geninfo_all_blocks=1 00:08:58.558 --rc geninfo_unexecuted_blocks=1 00:08:58.558 00:08:58.558 ' 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:58.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.558 --rc genhtml_branch_coverage=1 00:08:58.558 --rc genhtml_function_coverage=1 00:08:58.558 --rc genhtml_legend=1 00:08:58.558 --rc geninfo_all_blocks=1 00:08:58.558 --rc geninfo_unexecuted_blocks=1 00:08:58.558 00:08:58.558 ' 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:58.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.558 --rc genhtml_branch_coverage=1 00:08:58.558 --rc genhtml_function_coverage=1 00:08:58.558 --rc genhtml_legend=1 00:08:58.558 --rc geninfo_all_blocks=1 00:08:58.558 --rc geninfo_unexecuted_blocks=1 00:08:58.558 00:08:58.558 ' 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:58.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.558 --rc genhtml_branch_coverage=1 00:08:58.558 --rc genhtml_function_coverage=1 00:08:58.558 --rc genhtml_legend=1 00:08:58.558 --rc geninfo_all_blocks=1 00:08:58.558 --rc geninfo_unexecuted_blocks=1 00:08:58.558 00:08:58.558 ' 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.558 02:13:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.559 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.559 ************************************ 00:08:58.559 START TEST nvmf_host_management 00:08:58.559 ************************************ 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:58.559 * Looking for test storage... 
00:08:58.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:58.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.559 --rc genhtml_branch_coverage=1 00:08:58.559 --rc genhtml_function_coverage=1 00:08:58.559 --rc genhtml_legend=1 00:08:58.559 --rc geninfo_all_blocks=1 00:08:58.559 --rc geninfo_unexecuted_blocks=1 00:08:58.559 00:08:58.559 ' 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:58.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.559 --rc genhtml_branch_coverage=1 00:08:58.559 --rc genhtml_function_coverage=1 00:08:58.559 --rc genhtml_legend=1 00:08:58.559 --rc geninfo_all_blocks=1 00:08:58.559 --rc geninfo_unexecuted_blocks=1 00:08:58.559 00:08:58.559 ' 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:58.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.559 --rc genhtml_branch_coverage=1 00:08:58.559 --rc genhtml_function_coverage=1 00:08:58.559 --rc genhtml_legend=1 00:08:58.559 --rc geninfo_all_blocks=1 00:08:58.559 --rc geninfo_unexecuted_blocks=1 00:08:58.559 00:08:58.559 ' 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:58.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.559 --rc genhtml_branch_coverage=1 00:08:58.559 --rc genhtml_function_coverage=1 00:08:58.559 --rc genhtml_legend=1 00:08:58.559 --rc geninfo_all_blocks=1 00:08:58.559 --rc geninfo_unexecuted_blocks=1 00:08:58.559 00:08:58.559 ' 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
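The repeated lcov --version / "lt 1.15 2" traces above come from scripts/common.sh, which splits both version strings on dots and compares them field by field to decide how the LCOV_OPTS coverage flags get spelled. A condensed reconstruction of that comparison, based only on the trace (the real cmp_versions also validates each field through its decimal helper):

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local op=$2 v
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          # a missing field counts as 0, so 1.15 vs 2 compares 1 against 2 and stops there
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
              [[ $op == '>' || $op == '>=' ]]; return
          elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
              [[ $op == '<' || $op == '<=' ]]; return
          fi
      done
      [[ $op == '==' || $op == '<=' || $op == '>=' ]]
  }

  lt 1.15 2 && echo "installed lcov predates 2.x"   # matches the 'return 0' seen in the trace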
00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.559 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.846 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.846 02:13:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:58.846 Cannot find device "nvmf_init_br" 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:58.846 Cannot find device "nvmf_init_br2" 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:58.846 Cannot find device "nvmf_tgt_br" 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.846 Cannot find device "nvmf_tgt_br2" 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:58.846 Cannot find device "nvmf_init_br" 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:58.846 Cannot find device "nvmf_init_br2" 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:58.846 Cannot find device "nvmf_tgt_br" 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:58.846 Cannot find device "nvmf_tgt_br2" 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:58.846 Cannot find device "nvmf_br" 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:58.846 Cannot find device "nvmf_init_if" 00:08:58.846 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:58.847 Cannot find device "nvmf_init_if2" 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:58.847 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:59.112 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:59.112 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:08:59.112 00:08:59.112 --- 10.0.0.3 ping statistics --- 00:08:59.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.112 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:59.112 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:59.112 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:08:59.112 00:08:59.112 --- 10.0.0.4 ping statistics --- 00:08:59.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.112 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:59.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:59.112 00:08:59.112 --- 10.0.0.1 ping statistics --- 00:08:59.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.112 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:59.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:59.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:08:59.112 00:08:59.112 --- 10.0.0.2 ping statistics --- 00:08:59.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.112 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:59.112 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=75263 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 75263 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 75263 ']' 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
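Everything from the veth pairs down to the waitforlisten above is standard nvmf_veth_init plus nvmfappstart plumbing. A condensed, standalone sketch of the same topology and target launch follows; interface names, addresses, port and core mask are taken from the log, only the first interface pair per side is shown, and the RPC polling loop at the end is an assumption rather than the harness's own waitforlisten helper.

    # Build the initiator/target topology: a veth pair per side, the target end
    # moved into its own namespace, the bridge-facing ends enslaved to nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # The harness additionally tags each rule with an SPDK_NVMF comment so
    # teardown can strip exactly these rules later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                    # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

    # Start the NVMe-oF target inside the namespace and wait for its RPC socket.
    modprobe nvme-tcp
    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; do sleep 0.1; done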
00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.113 02:13:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:59.372 [2024-11-08 02:13:01.044811] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:59.372 [2024-11-08 02:13:01.044920] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.372 [2024-11-08 02:13:01.189178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.372 [2024-11-08 02:13:01.235641] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.372 [2024-11-08 02:13:01.235706] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.372 [2024-11-08 02:13:01.235720] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.372 [2024-11-08 02:13:01.235730] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.372 [2024-11-08 02:13:01.235739] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.372 [2024-11-08 02:13:01.235898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.372 [2024-11-08 02:13:01.236605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.372 [2024-11-08 02:13:01.236739] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:59.372 [2024-11-08 02:13:01.236748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.631 [2024-11-08 02:13:01.271009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.631 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.631 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:59.631 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:59.631 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:59.632 [2024-11-08 02:13:01.374819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:59.632 02:13:01 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:59.632 Malloc0 00:08:59.632 [2024-11-08 02:13:01.432083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=75315 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 75315 /var/tmp/bdevperf.sock 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 75315 ']' 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:59.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
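The subsystem itself is created from a batched rpcs.txt that is cat'd rather than echoed line by line, so only its results (the Malloc0 bdev and the TCP listener on 10.0.0.3 port 4420) are visible above. A plausible reconstruction with individual rpc.py calls is sketched below; the transport options are the ones echoed earlier, the 64 MiB malloc size and SPDK0 serial are illustrative assumptions, while the 512-byte block size and the cnode0/host0 NQNs come from the surrounding log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Transport as echoed above: TCP with an 8 KiB IO unit size.
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # Backing bdev and subsystem; size and serial number are assumptions.
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420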
00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:59.632 { 00:08:59.632 "params": { 00:08:59.632 "name": "Nvme$subsystem", 00:08:59.632 "trtype": "$TEST_TRANSPORT", 00:08:59.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:59.632 "adrfam": "ipv4", 00:08:59.632 "trsvcid": "$NVMF_PORT", 00:08:59.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:59.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:59.632 "hdgst": ${hdgst:-false}, 00:08:59.632 "ddgst": ${ddgst:-false} 00:08:59.632 }, 00:08:59.632 "method": "bdev_nvme_attach_controller" 00:08:59.632 } 00:08:59.632 EOF 00:08:59.632 )") 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:59.632 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:59.632 "params": { 00:08:59.632 "name": "Nvme0", 00:08:59.632 "trtype": "tcp", 00:08:59.632 "traddr": "10.0.0.3", 00:08:59.632 "adrfam": "ipv4", 00:08:59.632 "trsvcid": "4420", 00:08:59.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:59.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:59.632 "hdgst": false, 00:08:59.632 "ddgst": false 00:08:59.632 }, 00:08:59.632 "method": "bdev_nvme_attach_controller" 00:08:59.632 }' 00:08:59.891 [2024-11-08 02:13:01.538642] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:59.892 [2024-11-08 02:13:01.538730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75315 ] 00:08:59.892 [2024-11-08 02:13:01.682266] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.892 [2024-11-08 02:13:01.723258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.892 [2024-11-08 02:13:01.764289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:00.151 Running I/O for 10 seconds... 
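The bdevperf job above attaches Nvme0 over TCP using a config fed through /dev/fd/63. The same run can be reproduced by writing that config to a file; the attach parameters are copied from the JSON printed above, while the outer subsystems/bdev wrapper is the usual --json layout and is assumed here, since only the inner object is echoed in the log.

    cat > /tmp/nvme0.json << 'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON

    # 64-deep queue of 64 KiB verify I/O for 10 seconds against the attached bdev.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
        -q 64 -o 65536 -w verify -t 10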
00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.151 02:13:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.151 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:00.151 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:00.151 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:00.410 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:00.410 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:00.410 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:00.410 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:00.411 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.411 02:13:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.411 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.671 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:09:00.671 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:09:00.671 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:00.671 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:00.671 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:00.671 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:00.671 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.671 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.671 [2024-11-08 02:13:02.318595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.671 [2024-11-08 02:13:02.318653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.671 [2024-11-08 02:13:02.318676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.671 [2024-11-08 02:13:02.318688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.671 [2024-11-08 02:13:02.318699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.671 [2024-11-08 02:13:02.318708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.671 [2024-11-08 02:13:02.318719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 
02:13:02.318789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.318984] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.318996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.672 [2024-11-08 02:13:02.319477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.672 [2024-11-08 02:13:02.319486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.319983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.319994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.320002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.320013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:00.673 [2024-11-08 02:13:02.320021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:00.673 [2024-11-08 02:13:02.320031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1532460 is same with the state(6) to be set 00:09:00.673 [2024-11-08 02:13:02.320078] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1532460 was disconnected and freed. reset controller. 
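The burst of aborted reads above was only allowed to start once enough verify I/O had completed: the waitforio gate seen earlier (read_io_count 67, then 579 on the second poll) reads bdevperf's per-bdev statistics over its RPC socket until at least 100 reads are counted. A minimal standalone version of that loop, assuming the same socket path and bdev name:

    # Poll Nvme0n1's read counter via bdevperf's RPC socket, up to 10 attempts.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 10); do
        reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 0.25
    done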
00:09:00.673 [2024-11-08 02:13:02.321437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:00.673 task offset: 89984 on job bdev=Nvme0n1 fails 00:09:00.673 00:09:00.673 Latency(us) 00:09:00.673 [2024-11-08T02:13:02.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.673 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:00.673 Job: Nvme0n1 ended in about 0.45 seconds with error 00:09:00.673 Verification LBA range: start 0x0 length 0x400 00:09:00.673 Nvme0n1 : 0.45 1411.22 88.20 141.12 0.00 39630.68 5987.61 43372.92 00:09:00.673 [2024-11-08T02:13:02.557Z] =================================================================================================================== 00:09:00.673 [2024-11-08T02:13:02.557Z] Total : 1411.22 88.20 141.12 0.00 39630.68 5987.61 43372.92 00:09:00.673 [2024-11-08 02:13:02.323659] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:00.673 [2024-11-08 02:13:02.323697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c87a0 (9): Bad file descriptor 00:09:00.673 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.673 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:00.673 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.673 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.673 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.673 02:13:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:00.673 [2024-11-08 02:13:02.336173] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
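The reset sequence above is driven from the target side rather than by a transport fault: removing host0 from cnode0 makes the target tear down that host's queue pairs (hence the SQ DELETION aborts), bdevperf then resets the controller, and re-adding the host lets the reset reconnect successfully. A sketch of the same sequence by hand; the sleep is an arbitrary pause, not part of the test:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1    # let the initiator observe the aborted I/O and begin its reset
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0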
00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 75315 00:09:01.610 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (75315) - No such process 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:01.610 { 00:09:01.610 "params": { 00:09:01.610 "name": "Nvme$subsystem", 00:09:01.610 "trtype": "$TEST_TRANSPORT", 00:09:01.610 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:01.610 "adrfam": "ipv4", 00:09:01.610 "trsvcid": "$NVMF_PORT", 00:09:01.610 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:01.610 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:01.610 "hdgst": ${hdgst:-false}, 00:09:01.610 "ddgst": ${ddgst:-false} 00:09:01.610 }, 00:09:01.610 "method": "bdev_nvme_attach_controller" 00:09:01.610 } 00:09:01.610 EOF 00:09:01.610 )") 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:09:01.610 02:13:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:01.610 "params": { 00:09:01.610 "name": "Nvme0", 00:09:01.610 "trtype": "tcp", 00:09:01.610 "traddr": "10.0.0.3", 00:09:01.610 "adrfam": "ipv4", 00:09:01.610 "trsvcid": "4420", 00:09:01.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:01.610 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:01.610 "hdgst": false, 00:09:01.610 "ddgst": false 00:09:01.610 }, 00:09:01.610 "method": "bdev_nvme_attach_controller" 00:09:01.610 }' 00:09:01.610 [2024-11-08 02:13:03.388738] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
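The first bdevperf has already exited by the time the harness sends kill -9, so the failure is swallowed with an unconditional true before the CPU lock files are cleared. The equivalent defensive pattern in plain shell, with perfpid standing in for the saved PID:

    kill -9 "$perfpid" 2> /dev/null || true    # tolerate a process that already exited
    rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 \
          /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004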
00:09:01.610 [2024-11-08 02:13:03.388816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75355 ] 00:09:01.869 [2024-11-08 02:13:03.527305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.869 [2024-11-08 02:13:03.568674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.869 [2024-11-08 02:13:03.609679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.869 Running I/O for 1 seconds... 00:09:03.245 1600.00 IOPS, 100.00 MiB/s 00:09:03.245 Latency(us) 00:09:03.245 [2024-11-08T02:13:05.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.245 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:03.245 Verification LBA range: start 0x0 length 0x400 00:09:03.245 Nvme0n1 : 1.03 1614.63 100.91 0.00 0.00 38840.93 3485.32 37176.79 00:09:03.245 [2024-11-08T02:13:05.129Z] =================================================================================================================== 00:09:03.245 [2024-11-08T02:13:05.129Z] Total : 1614.63 100.91 0.00 0.00 38840.93 3485.32 37176.79 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:03.245 rmmod nvme_tcp 00:09:03.245 rmmod nvme_fabrics 00:09:03.245 rmmod nvme_keyring 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 75263 ']' 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 75263 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 75263 ']' 00:09:03.245 02:13:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 75263 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:03.245 02:13:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75263 00:09:03.245 killing process with pid 75263 00:09:03.245 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:03.245 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:03.245 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75263' 00:09:03.245 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 75263 00:09:03.246 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 75263 00:09:03.504 [2024-11-08 02:13:05.135402] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:03.504 02:13:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:03.504 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:03.762 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:03.763 00:09:03.763 real 0m5.188s 00:09:03.763 user 0m17.892s 00:09:03.763 sys 0m1.392s 00:09:03.763 ************************************ 00:09:03.763 END TEST nvmf_host_management 00:09:03.763 ************************************ 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.763 ************************************ 00:09:03.763 START TEST nvmf_lvol 00:09:03.763 ************************************ 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:03.763 * Looking for test storage... 
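The commands traced above are the nvmf_tcp_fini/nvmf_veth_fini teardown: firewall rules tagged by this test are stripped, the bridge-side veth ends are detached and downed, the bridge and the host-side initiator interfaces are deleted, and finally the target namespace (with the interfaces that were moved into it) goes away. A minimal stand-alone sketch of that cleanup, assuming the interface and namespace names used in this run and assuming remove_spdk_ns amounts to an ip netns delete:

    # drop only the firewall rules this test installed (they carry an SPDK_NVMF comment)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # detach the bridge-side veth ends and bring them down, then delete the bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    # host-side initiator interfaces (their veth peers vanish with them)
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    # target-side interfaces live inside the namespace; delete them, then the namespace
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed equivalent of remove_spdk_ns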
00:09:03.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:03.763 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.022 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:04.022 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:04.022 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.022 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:04.022 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.022 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.022 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.022 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:04.022 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.022 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:04.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.022 --rc genhtml_branch_coverage=1 00:09:04.022 --rc genhtml_function_coverage=1 00:09:04.022 --rc genhtml_legend=1 00:09:04.022 --rc geninfo_all_blocks=1 00:09:04.022 --rc geninfo_unexecuted_blocks=1 00:09:04.022 00:09:04.022 ' 00:09:04.022 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:04.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.022 --rc genhtml_branch_coverage=1 00:09:04.022 --rc genhtml_function_coverage=1 00:09:04.022 --rc genhtml_legend=1 00:09:04.022 --rc geninfo_all_blocks=1 00:09:04.022 --rc geninfo_unexecuted_blocks=1 00:09:04.022 00:09:04.023 ' 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:04.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.023 --rc genhtml_branch_coverage=1 00:09:04.023 --rc genhtml_function_coverage=1 00:09:04.023 --rc genhtml_legend=1 00:09:04.023 --rc geninfo_all_blocks=1 00:09:04.023 --rc geninfo_unexecuted_blocks=1 00:09:04.023 00:09:04.023 ' 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:04.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.023 --rc genhtml_branch_coverage=1 00:09:04.023 --rc genhtml_function_coverage=1 00:09:04.023 --rc genhtml_legend=1 00:09:04.023 --rc geninfo_all_blocks=1 00:09:04.023 --rc geninfo_unexecuted_blocks=1 00:09:04.023 00:09:04.023 ' 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.023 02:13:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:04.023 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:04.023 
02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.023 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
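The variables above lay out the test topology: two host-side initiator interfaces (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) that get moved into the nvmf_tgt_ns_spdk namespace, and a bridge nvmf_br joining the veth peer ends. The trace that follows builds exactly that; a condensed sketch of how the first initiator/target pair is wired (the second pair is wired the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  master nvmf_br && ip link set nvmf_tgt_br up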
00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:04.024 Cannot find device "nvmf_init_br" 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:04.024 Cannot find device "nvmf_init_br2" 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:04.024 Cannot find device "nvmf_tgt_br" 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:04.024 Cannot find device "nvmf_tgt_br2" 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:04.024 Cannot find device "nvmf_init_br" 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:04.024 Cannot find device "nvmf_init_br2" 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:04.024 Cannot find device "nvmf_tgt_br" 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:04.024 Cannot find device "nvmf_tgt_br2" 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:04.024 Cannot find device "nvmf_br" 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:04.024 Cannot find device "nvmf_init_if" 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:04.024 Cannot find device "nvmf_init_if2" 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:04.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:04.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:04.024 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:04.283 02:13:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:04.283 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:04.283 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:04.283 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:04.283 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:04.283 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:04.283 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:04.283 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:04.283 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:04.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:04.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 00:09:04.283 00:09:04.283 --- 10.0.0.3 ping statistics --- 00:09:04.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.284 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:04.284 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:04.284 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:09:04.284 00:09:04.284 --- 10.0.0.4 ping statistics --- 00:09:04.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.284 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:04.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:04.284 00:09:04.284 --- 10.0.0.1 ping statistics --- 00:09:04.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.284 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:04.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
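The ipts calls above are a thin wrapper that installs an iptables rule and tags it with an SPDK_NVMF comment; that tag is what the teardown earlier in this log keys on when it pipes iptables-save through grep -v SPDK_NVMF into iptables-restore. A hedged reconstruction of the wrapper, inferred only from the expanded commands visible in the trace:

    # install the rule as given and append a comment naming the exact rule,
    # so cleanup can later drop every SPDK_NVMF-tagged rule in one pass
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # allow bridged traffic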
00:09:04.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:09:04.284 00:09:04.284 --- 10.0.0.2 ping statistics --- 00:09:04.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.284 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=75620 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 75620 00:09:04.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 75620 ']' 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:04.284 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.284 [2024-11-08 02:13:06.148932] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
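With connectivity confirmed, nvmfappstart launches the target inside the namespace and waits for its RPC socket before the test proceeds. A minimal sketch of that step, assuming the default /var/tmp/spdk.sock RPC socket and using rpc_get_methods as the readiness probe (waitforlisten's real polling is more involved):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # block until the target answers on its RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null; do
        sleep 0.5
    done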
00:09:04.284 [2024-11-08 02:13:06.149285] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.542 [2024-11-08 02:13:06.295199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.542 [2024-11-08 02:13:06.340898] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.542 [2024-11-08 02:13:06.341158] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.542 [2024-11-08 02:13:06.341542] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.542 [2024-11-08 02:13:06.341796] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.542 [2024-11-08 02:13:06.341920] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.542 [2024-11-08 02:13:06.342004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.542 [2024-11-08 02:13:06.342164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.542 [2024-11-08 02:13:06.342196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.542 [2024-11-08 02:13:06.377221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.800 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.800 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:04.800 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:04.800 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:04.800 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.800 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.800 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:05.059 [2024-11-08 02:13:06.753141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.059 02:13:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.317 02:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:05.317 02:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.575 02:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:05.575 02:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:05.833 02:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:06.092 02:13:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5b470184-7e58-4d47-8144-cb195eb80958 00:09:06.092 02:13:07 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5b470184-7e58-4d47-8144-cb195eb80958 lvol 20 00:09:06.350 02:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4354ba57-d3d3-4081-9fc8-b24c82665064 00:09:06.350 02:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:06.608 02:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4354ba57-d3d3-4081-9fc8-b24c82665064 00:09:06.866 02:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:07.125 [2024-11-08 02:13:08.965392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:07.125 02:13:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:07.387 02:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:07.387 02:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=75688 00:09:07.387 02:13:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:08.760 02:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 4354ba57-d3d3-4081-9fc8-b24c82665064 MY_SNAPSHOT 00:09:08.760 02:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9e1fbb09-6c04-4869-a328-37169a3c3c5f 00:09:08.760 02:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 4354ba57-d3d3-4081-9fc8-b24c82665064 30 00:09:09.018 02:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 9e1fbb09-6c04-4869-a328-37169a3c3c5f MY_CLONE 00:09:09.276 02:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e2e346a9-1905-4ad6-b67b-7fe67cb2ff30 00:09:09.276 02:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e2e346a9-1905-4ad6-b67b-7fe67cb2ff30 00:09:09.842 02:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 75688 00:09:18.009 Initializing NVMe Controllers 00:09:18.009 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:18.009 Controller IO queue size 128, less than required. 00:09:18.009 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:18.009 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:18.009 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:18.009 Initialization complete. Launching workers. 
00:09:18.009 ======================================================== 00:09:18.009 Latency(us) 00:09:18.009 Device Information : IOPS MiB/s Average min max 00:09:18.009 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9548.80 37.30 13406.45 2213.03 56785.96 00:09:18.009 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9617.90 37.57 13317.97 3423.05 106791.35 00:09:18.009 ======================================================== 00:09:18.009 Total : 19166.70 74.87 13362.05 2213.03 106791.35 00:09:18.009 00:09:18.009 02:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:18.009 02:13:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4354ba57-d3d3-4081-9fc8-b24c82665064 00:09:18.268 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5b470184-7e58-4d47-8144-cb195eb80958 00:09:18.526 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:18.526 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:18.526 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:18.526 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:18.526 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.785 rmmod nvme_tcp 00:09:18.785 rmmod nvme_fabrics 00:09:18.785 rmmod nvme_keyring 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 75620 ']' 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 75620 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 75620 ']' 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 75620 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75620 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:18.785 killing process with pid 75620 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 75620' 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 75620 00:09:18.785 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 75620 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.044 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.303 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:09:19.303 00:09:19.303 real 0m15.473s 00:09:19.303 user 1m4.181s 00:09:19.303 sys 0m4.067s 00:09:19.303 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:19.303 02:13:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:19.303 ************************************ 00:09:19.303 END TEST nvmf_lvol 00:09:19.303 ************************************ 00:09:19.303 02:13:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:19.303 02:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:19.303 02:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.303 02:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.303 ************************************ 00:09:19.303 START TEST nvmf_lvs_grow 00:09:19.303 ************************************ 00:09:19.303 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:19.303 * Looking for test storage... 00:09:19.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:19.303 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:19.303 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:19.303 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:19.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.304 --rc genhtml_branch_coverage=1 00:09:19.304 --rc genhtml_function_coverage=1 00:09:19.304 --rc genhtml_legend=1 00:09:19.304 --rc geninfo_all_blocks=1 00:09:19.304 --rc geninfo_unexecuted_blocks=1 00:09:19.304 00:09:19.304 ' 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:19.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.304 --rc genhtml_branch_coverage=1 00:09:19.304 --rc genhtml_function_coverage=1 00:09:19.304 --rc genhtml_legend=1 00:09:19.304 --rc geninfo_all_blocks=1 00:09:19.304 --rc geninfo_unexecuted_blocks=1 00:09:19.304 00:09:19.304 ' 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:19.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.304 --rc genhtml_branch_coverage=1 00:09:19.304 --rc genhtml_function_coverage=1 00:09:19.304 --rc genhtml_legend=1 00:09:19.304 --rc geninfo_all_blocks=1 00:09:19.304 --rc geninfo_unexecuted_blocks=1 00:09:19.304 00:09:19.304 ' 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:19.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.304 --rc genhtml_branch_coverage=1 00:09:19.304 --rc genhtml_function_coverage=1 00:09:19.304 --rc genhtml_legend=1 00:09:19.304 --rc geninfo_all_blocks=1 00:09:19.304 --rc geninfo_unexecuted_blocks=1 00:09:19.304 00:09:19.304 ' 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:19.304 02:13:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.304 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.563 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:09:19.563 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:09:19.563 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.563 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.563 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:19.563 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.563 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.563 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.563 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.564 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
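The lt/cmp_versions trace just above (scripts/common.sh) decides whether the installed lcov predates 2.0: both version strings are split on '.', '-' and ':' and compared field by field, and the result selects the --rc lcov_branch_coverage=1 option spelling seen in the trace. A simplified, hedged re-creation of that check (renamed version_lt here, and assuming purely numeric version fields):

    version_lt() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first differing field decides
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    # 1.15 < 2, so this run keeps the pre-2.0 lcov option spelling
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "legacy lcov options"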
00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:19.564 Cannot find device "nvmf_init_br" 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:19.564 Cannot find device "nvmf_init_br2" 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:19.564 Cannot find device "nvmf_tgt_br" 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.564 Cannot find device "nvmf_tgt_br2" 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:19.564 Cannot find device "nvmf_init_br" 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:19.564 Cannot find device "nvmf_init_br2" 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:19.564 Cannot find device "nvmf_tgt_br" 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:19.564 Cannot find device "nvmf_tgt_br2" 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:19.564 Cannot find device "nvmf_br" 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:19.564 Cannot find device "nvmf_init_if" 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:19.564 Cannot find device "nvmf_init_if2" 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:19.564 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:19.565 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:19.565 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:19.565 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:19.565 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:19.565 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:19.565 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:19.565 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:19.565 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
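The "Cannot find device" and "Cannot open network namespace" lines above are just the idempotent teardown of links and namespaces that do not exist yet; after them, nvmf_veth_init builds the virtual topology the rest of the test runs on: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.x/24 addresses on each end, and the host-side peers enslaved to the nvmf_br bridge. A condensed sketch of that topology, showing one of the two pairs on each side (names and addresses mirror the commands logged above):

    ip netns add nvmf_tgt_ns_spdk                                # target runs inside its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers together
    ip link set nvmf_tgt_br master nvmf_br

The iptables ACCEPT rules for port 4420 and the ping checks that follow in the log then verify this path end to end before the NVMe-oF target is started.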
00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:19.824 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:19.824 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:09:19.824 00:09:19.824 --- 10.0.0.3 ping statistics --- 00:09:19.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.824 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:19.824 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:19.824 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:09:19.824 00:09:19.824 --- 10.0.0.4 ping statistics --- 00:09:19.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.824 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:19.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:19.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:19.824 00:09:19.824 --- 10.0.0.1 ping statistics --- 00:09:19.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.824 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:19.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:19.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:09:19.824 00:09:19.824 --- 10.0.0.2 ping statistics --- 00:09:19.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.824 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=76075 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 76075 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 76075 ']' 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.824 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:19.824 [2024-11-08 02:13:21.649269] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:19.824 [2024-11-08 02:13:21.649878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.083 [2024-11-08 02:13:21.785036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.083 [2024-11-08 02:13:21.820246] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.083 [2024-11-08 02:13:21.820310] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.083 [2024-11-08 02:13:21.820320] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.083 [2024-11-08 02:13:21.820326] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.083 [2024-11-08 02:13:21.820332] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.083 [2024-11-08 02:13:21.820361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.083 [2024-11-08 02:13:21.848760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.083 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.083 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:20.084 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:20.084 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.084 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.084 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.342 02:13:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:20.601 [2024-11-08 02:13:22.272766] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.601 ************************************ 00:09:20.601 START TEST lvs_grow_clean 00:09:20.601 ************************************ 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:20.601 02:13:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:20.601 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:20.860 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:20.860 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:21.119 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:21.119 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:21.119 02:13:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:21.378 02:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:21.378 02:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:21.378 02:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e24894dd-c3a4-4644-9590-edf1dcaef51a lvol 150 00:09:21.636 02:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=66de7291-8add-4483-9094-ec477dc458e0 00:09:21.636 02:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:21.636 02:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:21.894 [2024-11-08 02:13:23.639038] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:21.894 [2024-11-08 02:13:23.639179] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:21.894 true 00:09:21.894 02:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:21.894 02:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:22.153 02:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:22.153 02:13:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:22.411 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 66de7291-8add-4483-9094-ec477dc458e0 00:09:22.670 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:22.929 [2024-11-08 02:13:24.679655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:22.929 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:23.188 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76156 00:09:23.188 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:23.188 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:23.188 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76156 /var/tmp/bdevperf.sock 00:09:23.188 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 76156 ']' 00:09:23.188 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:23.188 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:23.188 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:23.188 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.188 02:13:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:23.188 [2024-11-08 02:13:25.057181] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:23.188 [2024-11-08 02:13:25.057296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76156 ] 00:09:23.448 [2024-11-08 02:13:25.201650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.448 [2024-11-08 02:13:25.244460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.448 [2024-11-08 02:13:25.278635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:23.448 02:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.448 02:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:23.448 02:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:24.015 Nvme0n1 00:09:24.016 02:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:24.275 [ 00:09:24.275 { 00:09:24.275 "name": "Nvme0n1", 00:09:24.275 "aliases": [ 00:09:24.275 "66de7291-8add-4483-9094-ec477dc458e0" 00:09:24.275 ], 00:09:24.275 "product_name": "NVMe disk", 00:09:24.275 "block_size": 4096, 00:09:24.275 "num_blocks": 38912, 00:09:24.275 "uuid": "66de7291-8add-4483-9094-ec477dc458e0", 00:09:24.275 "numa_id": -1, 00:09:24.275 "assigned_rate_limits": { 00:09:24.275 "rw_ios_per_sec": 0, 00:09:24.275 "rw_mbytes_per_sec": 0, 00:09:24.275 "r_mbytes_per_sec": 0, 00:09:24.275 "w_mbytes_per_sec": 0 00:09:24.275 }, 00:09:24.275 "claimed": false, 00:09:24.275 "zoned": false, 00:09:24.275 "supported_io_types": { 00:09:24.275 "read": true, 00:09:24.275 "write": true, 00:09:24.275 "unmap": true, 00:09:24.275 "flush": true, 00:09:24.275 "reset": true, 00:09:24.275 "nvme_admin": true, 00:09:24.275 "nvme_io": true, 00:09:24.275 "nvme_io_md": false, 00:09:24.275 "write_zeroes": true, 00:09:24.275 "zcopy": false, 00:09:24.275 "get_zone_info": false, 00:09:24.275 "zone_management": false, 00:09:24.275 "zone_append": false, 00:09:24.275 "compare": true, 00:09:24.275 "compare_and_write": true, 00:09:24.275 "abort": true, 00:09:24.275 "seek_hole": false, 00:09:24.275 "seek_data": false, 00:09:24.275 "copy": true, 00:09:24.275 "nvme_iov_md": false 00:09:24.275 }, 00:09:24.275 "memory_domains": [ 00:09:24.275 { 00:09:24.275 "dma_device_id": "system", 00:09:24.275 "dma_device_type": 1 00:09:24.275 } 00:09:24.275 ], 00:09:24.275 "driver_specific": { 00:09:24.275 "nvme": [ 00:09:24.275 { 00:09:24.275 "trid": { 00:09:24.275 "trtype": "TCP", 00:09:24.275 "adrfam": "IPv4", 00:09:24.275 "traddr": "10.0.0.3", 00:09:24.275 "trsvcid": "4420", 00:09:24.275 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:24.275 }, 00:09:24.275 "ctrlr_data": { 00:09:24.275 "cntlid": 1, 00:09:24.275 "vendor_id": "0x8086", 00:09:24.275 "model_number": "SPDK bdev Controller", 00:09:24.275 "serial_number": "SPDK0", 00:09:24.275 "firmware_revision": "24.09.1", 00:09:24.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:24.275 "oacs": { 00:09:24.275 "security": 0, 00:09:24.275 "format": 0, 00:09:24.275 "firmware": 0, 
00:09:24.275 "ns_manage": 0 00:09:24.275 }, 00:09:24.275 "multi_ctrlr": true, 00:09:24.275 "ana_reporting": false 00:09:24.275 }, 00:09:24.275 "vs": { 00:09:24.275 "nvme_version": "1.3" 00:09:24.275 }, 00:09:24.275 "ns_data": { 00:09:24.275 "id": 1, 00:09:24.275 "can_share": true 00:09:24.275 } 00:09:24.275 } 00:09:24.275 ], 00:09:24.275 "mp_policy": "active_passive" 00:09:24.275 } 00:09:24.275 } 00:09:24.275 ] 00:09:24.275 02:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:24.275 02:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76172 00:09:24.275 02:13:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:24.275 Running I/O for 10 seconds... 00:09:25.212 Latency(us) 00:09:25.212 [2024-11-08T02:13:27.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.212 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:25.212 [2024-11-08T02:13:27.096Z] =================================================================================================================== 00:09:25.212 [2024-11-08T02:13:27.096Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:25.212 00:09:26.149 02:13:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:26.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.413 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:26.413 [2024-11-08T02:13:28.297Z] =================================================================================================================== 00:09:26.413 [2024-11-08T02:13:28.297Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:26.413 00:09:26.674 true 00:09:26.674 02:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:26.674 02:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:26.932 02:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:26.932 02:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:26.932 02:13:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 76172 00:09:27.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.191 Nvme0n1 : 3.00 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:09:27.191 [2024-11-08T02:13:29.075Z] =================================================================================================================== 00:09:27.191 [2024-11-08T02:13:29.075Z] Total : 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:09:27.191 00:09:28.574 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.574 Nvme0n1 : 4.00 6762.75 26.42 0.00 0.00 0.00 0.00 0.00 00:09:28.574 [2024-11-08T02:13:30.458Z] 
=================================================================================================================== 00:09:28.574 [2024-11-08T02:13:30.458Z] Total : 6762.75 26.42 0.00 0.00 0.00 0.00 0.00 00:09:28.574 00:09:29.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.204 Nvme0n1 : 5.00 6705.60 26.19 0.00 0.00 0.00 0.00 0.00 00:09:29.204 [2024-11-08T02:13:31.088Z] =================================================================================================================== 00:09:29.204 [2024-11-08T02:13:31.088Z] Total : 6705.60 26.19 0.00 0.00 0.00 0.00 0.00 00:09:29.204 00:09:30.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.582 Nvme0n1 : 6.00 6563.33 25.64 0.00 0.00 0.00 0.00 0.00 00:09:30.582 [2024-11-08T02:13:32.466Z] =================================================================================================================== 00:09:30.582 [2024-11-08T02:13:32.466Z] Total : 6563.33 25.64 0.00 0.00 0.00 0.00 0.00 00:09:30.582 00:09:31.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.519 Nvme0n1 : 7.00 6551.00 25.59 0.00 0.00 0.00 0.00 0.00 00:09:31.519 [2024-11-08T02:13:33.403Z] =================================================================================================================== 00:09:31.519 [2024-11-08T02:13:33.403Z] Total : 6551.00 25.59 0.00 0.00 0.00 0.00 0.00 00:09:31.519 00:09:32.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.455 Nvme0n1 : 8.00 6557.62 25.62 0.00 0.00 0.00 0.00 0.00 00:09:32.455 [2024-11-08T02:13:34.339Z] =================================================================================================================== 00:09:32.455 [2024-11-08T02:13:34.339Z] Total : 6557.62 25.62 0.00 0.00 0.00 0.00 0.00 00:09:32.455 00:09:33.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.392 Nvme0n1 : 9.00 6548.67 25.58 0.00 0.00 0.00 0.00 0.00 00:09:33.392 [2024-11-08T02:13:35.276Z] =================================================================================================================== 00:09:33.392 [2024-11-08T02:13:35.276Z] Total : 6548.67 25.58 0.00 0.00 0.00 0.00 0.00 00:09:33.392 00:09:34.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.330 Nvme0n1 : 10.00 6515.20 25.45 0.00 0.00 0.00 0.00 0.00 00:09:34.330 [2024-11-08T02:13:36.214Z] =================================================================================================================== 00:09:34.330 [2024-11-08T02:13:36.214Z] Total : 6515.20 25.45 0.00 0.00 0.00 0.00 0.00 00:09:34.330 00:09:34.330 00:09:34.330 Latency(us) 00:09:34.330 [2024-11-08T02:13:36.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.330 Nvme0n1 : 10.02 6517.99 25.46 0.00 0.00 19633.10 10843.23 126782.37 00:09:34.330 [2024-11-08T02:13:36.214Z] =================================================================================================================== 00:09:34.330 [2024-11-08T02:13:36.214Z] Total : 6517.99 25.46 0.00 0.00 19633.10 10843.23 126782.37 00:09:34.330 { 00:09:34.330 "results": [ 00:09:34.330 { 00:09:34.330 "job": "Nvme0n1", 00:09:34.330 "core_mask": "0x2", 00:09:34.330 "workload": "randwrite", 00:09:34.330 "status": "finished", 00:09:34.330 "queue_depth": 128, 00:09:34.330 "io_size": 4096, 00:09:34.330 "runtime": 
10.01536, 00:09:34.330 "iops": 6517.988369863889, 00:09:34.330 "mibps": 25.460892069780815, 00:09:34.330 "io_failed": 0, 00:09:34.330 "io_timeout": 0, 00:09:34.330 "avg_latency_us": 19633.09724064171, 00:09:34.330 "min_latency_us": 10843.229090909092, 00:09:34.330 "max_latency_us": 126782.37090909091 00:09:34.330 } 00:09:34.330 ], 00:09:34.330 "core_count": 1 00:09:34.330 } 00:09:34.330 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76156 00:09:34.330 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 76156 ']' 00:09:34.330 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 76156 00:09:34.330 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:34.330 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.330 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76156 00:09:34.330 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:34.330 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:34.330 killing process with pid 76156 00:09:34.330 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76156' 00:09:34.330 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 76156 00:09:34.330 Received shutdown signal, test time was about 10.000000 seconds 00:09:34.330 00:09:34.330 Latency(us) 00:09:34.330 [2024-11-08T02:13:36.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.330 [2024-11-08T02:13:36.214Z] =================================================================================================================== 00:09:34.330 [2024-11-08T02:13:36.214Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:34.330 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 76156 00:09:34.589 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:34.849 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:35.108 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:35.108 02:13:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:35.367 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:35.367 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:35.367 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:35.626 [2024-11-08 02:13:37.342747] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:35.626 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:35.885 request: 00:09:35.885 { 00:09:35.885 "uuid": "e24894dd-c3a4-4644-9590-edf1dcaef51a", 00:09:35.885 "method": "bdev_lvol_get_lvstores", 00:09:35.885 "req_id": 1 00:09:35.885 } 00:09:35.885 Got JSON-RPC error response 00:09:35.885 response: 00:09:35.885 { 00:09:35.886 "code": -19, 00:09:35.886 "message": "No such device" 00:09:35.886 } 00:09:35.886 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:35.886 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:35.886 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:35.886 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:35.886 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:36.145 aio_bdev 00:09:36.145 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
66de7291-8add-4483-9094-ec477dc458e0 00:09:36.145 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=66de7291-8add-4483-9094-ec477dc458e0 00:09:36.145 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.145 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:36.145 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.145 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.145 02:13:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:36.405 02:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 66de7291-8add-4483-9094-ec477dc458e0 -t 2000 00:09:36.664 [ 00:09:36.664 { 00:09:36.664 "name": "66de7291-8add-4483-9094-ec477dc458e0", 00:09:36.664 "aliases": [ 00:09:36.664 "lvs/lvol" 00:09:36.664 ], 00:09:36.664 "product_name": "Logical Volume", 00:09:36.664 "block_size": 4096, 00:09:36.664 "num_blocks": 38912, 00:09:36.664 "uuid": "66de7291-8add-4483-9094-ec477dc458e0", 00:09:36.664 "assigned_rate_limits": { 00:09:36.664 "rw_ios_per_sec": 0, 00:09:36.664 "rw_mbytes_per_sec": 0, 00:09:36.664 "r_mbytes_per_sec": 0, 00:09:36.664 "w_mbytes_per_sec": 0 00:09:36.664 }, 00:09:36.664 "claimed": false, 00:09:36.664 "zoned": false, 00:09:36.664 "supported_io_types": { 00:09:36.664 "read": true, 00:09:36.664 "write": true, 00:09:36.664 "unmap": true, 00:09:36.664 "flush": false, 00:09:36.664 "reset": true, 00:09:36.664 "nvme_admin": false, 00:09:36.664 "nvme_io": false, 00:09:36.664 "nvme_io_md": false, 00:09:36.664 "write_zeroes": true, 00:09:36.664 "zcopy": false, 00:09:36.664 "get_zone_info": false, 00:09:36.664 "zone_management": false, 00:09:36.664 "zone_append": false, 00:09:36.664 "compare": false, 00:09:36.664 "compare_and_write": false, 00:09:36.664 "abort": false, 00:09:36.664 "seek_hole": true, 00:09:36.664 "seek_data": true, 00:09:36.664 "copy": false, 00:09:36.664 "nvme_iov_md": false 00:09:36.664 }, 00:09:36.664 "driver_specific": { 00:09:36.664 "lvol": { 00:09:36.664 "lvol_store_uuid": "e24894dd-c3a4-4644-9590-edf1dcaef51a", 00:09:36.664 "base_bdev": "aio_bdev", 00:09:36.664 "thin_provision": false, 00:09:36.664 "num_allocated_clusters": 38, 00:09:36.664 "snapshot": false, 00:09:36.664 "clone": false, 00:09:36.664 "esnap_clone": false 00:09:36.664 } 00:09:36.664 } 00:09:36.664 } 00:09:36.664 ] 00:09:36.664 02:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:36.664 02:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:36.664 02:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:37.232 02:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:37.232 02:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:37.232 02:13:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:37.232 02:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:37.232 02:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 66de7291-8add-4483-9094-ec477dc458e0 00:09:37.491 02:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e24894dd-c3a4-4644-9590-edf1dcaef51a 00:09:37.750 02:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:38.008 02:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:38.576 ************************************ 00:09:38.576 END TEST lvs_grow_clean 00:09:38.576 ************************************ 00:09:38.576 00:09:38.576 real 0m17.911s 00:09:38.576 user 0m16.820s 00:09:38.576 sys 0m2.452s 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.576 ************************************ 00:09:38.576 START TEST lvs_grow_dirty 00:09:38.576 ************************************ 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:38.576 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:38.835 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:38.835 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:39.094 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:39.094 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:39.094 02:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:39.353 02:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:39.353 02:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:39.353 02:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 lvol 150 00:09:39.613 02:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=84de5753-9cc1-4bb7-84c2-b6abbefd148d 00:09:39.613 02:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:39.613 02:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:39.871 [2024-11-08 02:13:41.554916] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:39.871 [2024-11-08 02:13:41.555007] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:39.871 true 00:09:39.871 02:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:39.871 02:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:40.130 02:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:40.130 02:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:40.388 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 84de5753-9cc1-4bb7-84c2-b6abbefd148d 00:09:40.647 02:13:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:40.905 [2024-11-08 02:13:42.591545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:40.905 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:41.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:41.164 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76420 00:09:41.164 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:41.164 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:41.164 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76420 /var/tmp/bdevperf.sock 00:09:41.164 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 76420 ']' 00:09:41.164 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:41.164 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:41.164 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:41.164 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:41.164 02:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:41.164 [2024-11-08 02:13:42.897004] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:41.164 [2024-11-08 02:13:42.897332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76420 ] 00:09:41.164 [2024-11-08 02:13:43.035013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.423 [2024-11-08 02:13:43.077014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.423 [2024-11-08 02:13:43.110867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.990 02:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:41.990 02:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:41.990 02:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:42.249 Nvme0n1 00:09:42.249 02:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:42.843 [ 00:09:42.843 { 00:09:42.843 "name": "Nvme0n1", 00:09:42.843 "aliases": [ 00:09:42.843 "84de5753-9cc1-4bb7-84c2-b6abbefd148d" 00:09:42.843 ], 00:09:42.843 "product_name": "NVMe disk", 00:09:42.843 "block_size": 4096, 00:09:42.843 "num_blocks": 38912, 00:09:42.843 "uuid": "84de5753-9cc1-4bb7-84c2-b6abbefd148d", 00:09:42.843 "numa_id": -1, 00:09:42.843 "assigned_rate_limits": { 00:09:42.843 "rw_ios_per_sec": 0, 00:09:42.843 "rw_mbytes_per_sec": 0, 00:09:42.843 "r_mbytes_per_sec": 0, 00:09:42.843 "w_mbytes_per_sec": 0 00:09:42.843 }, 00:09:42.843 "claimed": false, 00:09:42.843 "zoned": false, 00:09:42.843 "supported_io_types": { 00:09:42.843 "read": true, 00:09:42.843 "write": true, 00:09:42.843 "unmap": true, 00:09:42.843 "flush": true, 00:09:42.843 "reset": true, 00:09:42.843 "nvme_admin": true, 00:09:42.843 "nvme_io": true, 00:09:42.843 "nvme_io_md": false, 00:09:42.843 "write_zeroes": true, 00:09:42.843 "zcopy": false, 00:09:42.843 "get_zone_info": false, 00:09:42.843 "zone_management": false, 00:09:42.843 "zone_append": false, 00:09:42.843 "compare": true, 00:09:42.843 "compare_and_write": true, 00:09:42.843 "abort": true, 00:09:42.843 "seek_hole": false, 00:09:42.843 "seek_data": false, 00:09:42.843 "copy": true, 00:09:42.843 "nvme_iov_md": false 00:09:42.843 }, 00:09:42.843 "memory_domains": [ 00:09:42.843 { 00:09:42.843 "dma_device_id": "system", 00:09:42.843 "dma_device_type": 1 00:09:42.843 } 00:09:42.843 ], 00:09:42.843 "driver_specific": { 00:09:42.843 "nvme": [ 00:09:42.843 { 00:09:42.843 "trid": { 00:09:42.843 "trtype": "TCP", 00:09:42.843 "adrfam": "IPv4", 00:09:42.843 "traddr": "10.0.0.3", 00:09:42.843 "trsvcid": "4420", 00:09:42.843 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:42.843 }, 00:09:42.843 "ctrlr_data": { 00:09:42.843 "cntlid": 1, 00:09:42.843 "vendor_id": "0x8086", 00:09:42.843 "model_number": "SPDK bdev Controller", 00:09:42.843 "serial_number": "SPDK0", 00:09:42.843 "firmware_revision": "24.09.1", 00:09:42.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:42.843 "oacs": { 00:09:42.843 "security": 0, 00:09:42.843 "format": 0, 00:09:42.843 "firmware": 0, 
00:09:42.843 "ns_manage": 0 00:09:42.843 }, 00:09:42.843 "multi_ctrlr": true, 00:09:42.843 "ana_reporting": false 00:09:42.843 }, 00:09:42.843 "vs": { 00:09:42.843 "nvme_version": "1.3" 00:09:42.843 }, 00:09:42.843 "ns_data": { 00:09:42.843 "id": 1, 00:09:42.843 "can_share": true 00:09:42.843 } 00:09:42.843 } 00:09:42.843 ], 00:09:42.843 "mp_policy": "active_passive" 00:09:42.843 } 00:09:42.843 } 00:09:42.843 ] 00:09:42.843 02:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76449 00:09:42.843 02:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:42.843 02:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:42.843 Running I/O for 10 seconds... 00:09:43.779 Latency(us) 00:09:43.779 [2024-11-08T02:13:45.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.779 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:43.779 [2024-11-08T02:13:45.663Z] =================================================================================================================== 00:09:43.779 [2024-11-08T02:13:45.663Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:43.779 00:09:44.716 02:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:44.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.716 Nvme0n1 : 2.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:44.716 [2024-11-08T02:13:46.600Z] =================================================================================================================== 00:09:44.716 [2024-11-08T02:13:46.600Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:44.716 00:09:44.975 true 00:09:44.975 02:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:44.975 02:13:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:45.542 02:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:45.542 02:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:45.542 02:13:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 76449 00:09:45.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.802 Nvme0n1 : 3.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:45.802 [2024-11-08T02:13:47.686Z] =================================================================================================================== 00:09:45.802 [2024-11-08T02:13:47.686Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:45.802 00:09:46.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.736 Nvme0n1 : 4.00 6635.75 25.92 0.00 0.00 0.00 0.00 0.00 00:09:46.736 [2024-11-08T02:13:48.620Z] 
=================================================================================================================== 00:09:46.736 [2024-11-08T02:13:48.620Z] Total : 6635.75 25.92 0.00 0.00 0.00 0.00 0.00 00:09:46.736 00:09:48.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.113 Nvme0n1 : 5.00 6629.40 25.90 0.00 0.00 0.00 0.00 0.00 00:09:48.113 [2024-11-08T02:13:49.997Z] =================================================================================================================== 00:09:48.113 [2024-11-08T02:13:49.997Z] Total : 6629.40 25.90 0.00 0.00 0.00 0.00 0.00 00:09:48.113 00:09:49.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.048 Nvme0n1 : 6.00 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:09:49.048 [2024-11-08T02:13:50.932Z] =================================================================================================================== 00:09:49.048 [2024-11-08T02:13:50.932Z] Total : 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:09:49.048 00:09:49.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.983 Nvme0n1 : 7.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:49.983 [2024-11-08T02:13:51.867Z] =================================================================================================================== 00:09:49.983 [2024-11-08T02:13:51.867Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:49.983 00:09:50.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.921 Nvme0n1 : 8.00 6474.25 25.29 0.00 0.00 0.00 0.00 0.00 00:09:50.921 [2024-11-08T02:13:52.805Z] =================================================================================================================== 00:09:50.921 [2024-11-08T02:13:52.805Z] Total : 6474.25 25.29 0.00 0.00 0.00 0.00 0.00 00:09:50.921 00:09:51.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.857 Nvme0n1 : 9.00 6446.33 25.18 0.00 0.00 0.00 0.00 0.00 00:09:51.857 [2024-11-08T02:13:53.741Z] =================================================================================================================== 00:09:51.857 [2024-11-08T02:13:53.741Z] Total : 6446.33 25.18 0.00 0.00 0.00 0.00 0.00 00:09:51.857 00:09:52.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.795 Nvme0n1 : 10.00 6436.70 25.14 0.00 0.00 0.00 0.00 0.00 00:09:52.795 [2024-11-08T02:13:54.679Z] =================================================================================================================== 00:09:52.795 [2024-11-08T02:13:54.679Z] Total : 6436.70 25.14 0.00 0.00 0.00 0.00 0.00 00:09:52.795 00:09:52.795 00:09:52.795 Latency(us) 00:09:52.795 [2024-11-08T02:13:54.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.795 Nvme0n1 : 10.02 6437.20 25.15 0.00 0.00 19878.52 15132.86 158239.65 00:09:52.795 [2024-11-08T02:13:54.679Z] =================================================================================================================== 00:09:52.795 [2024-11-08T02:13:54.679Z] Total : 6437.20 25.15 0.00 0.00 19878.52 15132.86 158239.65 00:09:52.795 { 00:09:52.795 "results": [ 00:09:52.795 { 00:09:52.795 "job": "Nvme0n1", 00:09:52.795 "core_mask": "0x2", 00:09:52.795 "workload": "randwrite", 00:09:52.795 "status": "finished", 00:09:52.795 "queue_depth": 128, 00:09:52.795 "io_size": 4096, 00:09:52.795 "runtime": 
10.019106, 00:09:52.795 "iops": 6437.201083609655, 00:09:52.795 "mibps": 25.145316732850215, 00:09:52.795 "io_failed": 0, 00:09:52.795 "io_timeout": 0, 00:09:52.795 "avg_latency_us": 19878.518786713557, 00:09:52.795 "min_latency_us": 15132.858181818181, 00:09:52.795 "max_latency_us": 158239.6509090909 00:09:52.795 } 00:09:52.795 ], 00:09:52.795 "core_count": 1 00:09:52.795 } 00:09:52.795 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76420 00:09:52.795 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 76420 ']' 00:09:52.795 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 76420 00:09:52.795 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:52.795 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.795 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76420 00:09:52.795 killing process with pid 76420 00:09:52.795 Received shutdown signal, test time was about 10.000000 seconds 00:09:52.795 00:09:52.795 Latency(us) 00:09:52.795 [2024-11-08T02:13:54.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.795 [2024-11-08T02:13:54.679Z] =================================================================================================================== 00:09:52.795 [2024-11-08T02:13:54.679Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:52.795 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:52.795 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:52.795 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76420' 00:09:52.795 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 76420 00:09:52.795 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 76420 00:09:53.054 02:13:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:53.312 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:53.571 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:53.571 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 76075 
00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 76075 00:09:53.830 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 76075 Killed "${NVMF_APP[@]}" "$@" 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:53.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=76582 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 76582 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 76582 ']' 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:53.830 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:54.089 [2024-11-08 02:13:55.723824] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:54.089 [2024-11-08 02:13:55.724189] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.089 [2024-11-08 02:13:55.858916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.089 [2024-11-08 02:13:55.892676] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.089 [2024-11-08 02:13:55.892724] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.089 [2024-11-08 02:13:55.892750] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.089 [2024-11-08 02:13:55.892758] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.089 [2024-11-08 02:13:55.892764] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:54.089 [2024-11-08 02:13:55.892789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.089 [2024-11-08 02:13:55.920959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.089 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.089 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:54.089 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:54.089 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:54.089 02:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:54.347 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.348 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:54.606 [2024-11-08 02:13:56.287252] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:54.606 [2024-11-08 02:13:56.287705] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:54.606 [2024-11-08 02:13:56.288306] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:54.606 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:54.606 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 84de5753-9cc1-4bb7-84c2-b6abbefd148d 00:09:54.606 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=84de5753-9cc1-4bb7-84c2-b6abbefd148d 00:09:54.606 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.606 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:54.606 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.606 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.606 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:54.865 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84de5753-9cc1-4bb7-84c2-b6abbefd148d -t 2000 00:09:55.127 [ 00:09:55.127 { 00:09:55.127 "name": "84de5753-9cc1-4bb7-84c2-b6abbefd148d", 00:09:55.127 "aliases": [ 00:09:55.127 "lvs/lvol" 00:09:55.127 ], 00:09:55.127 "product_name": "Logical Volume", 00:09:55.127 "block_size": 4096, 00:09:55.127 "num_blocks": 38912, 00:09:55.127 "uuid": "84de5753-9cc1-4bb7-84c2-b6abbefd148d", 00:09:55.127 "assigned_rate_limits": { 00:09:55.127 "rw_ios_per_sec": 0, 00:09:55.127 "rw_mbytes_per_sec": 0, 00:09:55.127 "r_mbytes_per_sec": 0, 00:09:55.127 "w_mbytes_per_sec": 0 00:09:55.127 }, 00:09:55.127 
"claimed": false, 00:09:55.127 "zoned": false, 00:09:55.127 "supported_io_types": { 00:09:55.127 "read": true, 00:09:55.127 "write": true, 00:09:55.127 "unmap": true, 00:09:55.127 "flush": false, 00:09:55.127 "reset": true, 00:09:55.127 "nvme_admin": false, 00:09:55.127 "nvme_io": false, 00:09:55.127 "nvme_io_md": false, 00:09:55.127 "write_zeroes": true, 00:09:55.127 "zcopy": false, 00:09:55.127 "get_zone_info": false, 00:09:55.127 "zone_management": false, 00:09:55.127 "zone_append": false, 00:09:55.127 "compare": false, 00:09:55.127 "compare_and_write": false, 00:09:55.127 "abort": false, 00:09:55.127 "seek_hole": true, 00:09:55.127 "seek_data": true, 00:09:55.127 "copy": false, 00:09:55.127 "nvme_iov_md": false 00:09:55.127 }, 00:09:55.127 "driver_specific": { 00:09:55.127 "lvol": { 00:09:55.127 "lvol_store_uuid": "8cc81060-d2c0-476f-bdeb-ca65ccfc4833", 00:09:55.127 "base_bdev": "aio_bdev", 00:09:55.127 "thin_provision": false, 00:09:55.127 "num_allocated_clusters": 38, 00:09:55.127 "snapshot": false, 00:09:55.127 "clone": false, 00:09:55.127 "esnap_clone": false 00:09:55.127 } 00:09:55.127 } 00:09:55.127 } 00:09:55.127 ] 00:09:55.127 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:55.127 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:55.127 02:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:55.392 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:55.392 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:55.392 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:55.652 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:55.652 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:55.909 [2024-11-08 02:13:57.665379] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:55.909 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:55.909 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:55.909 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:55.909 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.909 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:55.909 02:13:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.909 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:55.909 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.909 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:55.909 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.909 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:55.909 02:13:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:56.168 request: 00:09:56.168 { 00:09:56.168 "uuid": "8cc81060-d2c0-476f-bdeb-ca65ccfc4833", 00:09:56.168 "method": "bdev_lvol_get_lvstores", 00:09:56.168 "req_id": 1 00:09:56.168 } 00:09:56.168 Got JSON-RPC error response 00:09:56.168 response: 00:09:56.168 { 00:09:56.168 "code": -19, 00:09:56.168 "message": "No such device" 00:09:56.168 } 00:09:56.168 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:56.168 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:56.168 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:56.168 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:56.168 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:56.427 aio_bdev 00:09:56.427 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 84de5753-9cc1-4bb7-84c2-b6abbefd148d 00:09:56.427 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=84de5753-9cc1-4bb7-84c2-b6abbefd148d 00:09:56.427 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.427 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:56.427 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.427 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.427 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:56.686 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 84de5753-9cc1-4bb7-84c2-b6abbefd148d -t 2000 00:09:56.944 [ 00:09:56.944 { 
00:09:56.944 "name": "84de5753-9cc1-4bb7-84c2-b6abbefd148d", 00:09:56.944 "aliases": [ 00:09:56.944 "lvs/lvol" 00:09:56.944 ], 00:09:56.944 "product_name": "Logical Volume", 00:09:56.944 "block_size": 4096, 00:09:56.944 "num_blocks": 38912, 00:09:56.944 "uuid": "84de5753-9cc1-4bb7-84c2-b6abbefd148d", 00:09:56.944 "assigned_rate_limits": { 00:09:56.944 "rw_ios_per_sec": 0, 00:09:56.944 "rw_mbytes_per_sec": 0, 00:09:56.944 "r_mbytes_per_sec": 0, 00:09:56.944 "w_mbytes_per_sec": 0 00:09:56.944 }, 00:09:56.944 "claimed": false, 00:09:56.944 "zoned": false, 00:09:56.944 "supported_io_types": { 00:09:56.944 "read": true, 00:09:56.944 "write": true, 00:09:56.944 "unmap": true, 00:09:56.944 "flush": false, 00:09:56.944 "reset": true, 00:09:56.944 "nvme_admin": false, 00:09:56.944 "nvme_io": false, 00:09:56.944 "nvme_io_md": false, 00:09:56.944 "write_zeroes": true, 00:09:56.944 "zcopy": false, 00:09:56.944 "get_zone_info": false, 00:09:56.944 "zone_management": false, 00:09:56.944 "zone_append": false, 00:09:56.944 "compare": false, 00:09:56.944 "compare_and_write": false, 00:09:56.944 "abort": false, 00:09:56.944 "seek_hole": true, 00:09:56.944 "seek_data": true, 00:09:56.944 "copy": false, 00:09:56.944 "nvme_iov_md": false 00:09:56.944 }, 00:09:56.944 "driver_specific": { 00:09:56.944 "lvol": { 00:09:56.944 "lvol_store_uuid": "8cc81060-d2c0-476f-bdeb-ca65ccfc4833", 00:09:56.944 "base_bdev": "aio_bdev", 00:09:56.944 "thin_provision": false, 00:09:56.944 "num_allocated_clusters": 38, 00:09:56.944 "snapshot": false, 00:09:56.944 "clone": false, 00:09:56.944 "esnap_clone": false 00:09:56.944 } 00:09:56.944 } 00:09:56.944 } 00:09:56.944 ] 00:09:56.944 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:56.944 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:56.944 02:13:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:57.511 02:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:57.511 02:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:57.511 02:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:57.511 02:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:57.511 02:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 84de5753-9cc1-4bb7-84c2-b6abbefd148d 00:09:57.769 02:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8cc81060-d2c0-476f-bdeb-ca65ccfc4833 00:09:58.028 02:13:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:58.286 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:58.854 ************************************ 00:09:58.854 END TEST lvs_grow_dirty 00:09:58.854 ************************************ 00:09:58.854 00:09:58.854 real 0m20.195s 00:09:58.854 user 0m41.070s 00:09:58.854 sys 0m9.205s 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:58.854 nvmf_trace.0 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.854 rmmod nvme_tcp 00:09:58.854 rmmod nvme_fabrics 00:09:58.854 rmmod nvme_keyring 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 76582 ']' 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 76582 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 76582 ']' 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 76582 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:58.854 02:14:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.854 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76582 00:09:59.113 killing process with pid 76582 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76582' 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 76582 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 76582 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:59.113 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:59.114 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:59.114 02:14:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:59.372 00:09:59.372 real 0m40.109s 00:09:59.372 user 1m3.736s 00:09:59.372 sys 0m12.375s 00:09:59.372 ************************************ 00:09:59.372 END TEST nvmf_lvs_grow 00:09:59.372 ************************************ 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.372 ************************************ 00:09:59.372 START TEST nvmf_bdev_io_wait 00:09:59.372 ************************************ 00:09:59.372 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:59.373 * Looking for test storage... 
00:09:59.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:59.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.632 --rc genhtml_branch_coverage=1 00:09:59.632 --rc genhtml_function_coverage=1 00:09:59.632 --rc genhtml_legend=1 00:09:59.632 --rc geninfo_all_blocks=1 00:09:59.632 --rc geninfo_unexecuted_blocks=1 00:09:59.632 00:09:59.632 ' 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:59.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.632 --rc genhtml_branch_coverage=1 00:09:59.632 --rc genhtml_function_coverage=1 00:09:59.632 --rc genhtml_legend=1 00:09:59.632 --rc geninfo_all_blocks=1 00:09:59.632 --rc geninfo_unexecuted_blocks=1 00:09:59.632 00:09:59.632 ' 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:59.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.632 --rc genhtml_branch_coverage=1 00:09:59.632 --rc genhtml_function_coverage=1 00:09:59.632 --rc genhtml_legend=1 00:09:59.632 --rc geninfo_all_blocks=1 00:09:59.632 --rc geninfo_unexecuted_blocks=1 00:09:59.632 00:09:59.632 ' 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:59.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.632 --rc genhtml_branch_coverage=1 00:09:59.632 --rc genhtml_function_coverage=1 00:09:59.632 --rc genhtml_legend=1 00:09:59.632 --rc geninfo_all_blocks=1 00:09:59.632 --rc geninfo_unexecuted_blocks=1 00:09:59.632 00:09:59.632 ' 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.632 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.633 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:59.633 
02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:59.633 Cannot find device "nvmf_init_br" 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:59.633 Cannot find device "nvmf_init_br2" 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:59.633 Cannot find device "nvmf_tgt_br" 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.633 Cannot find device "nvmf_tgt_br2" 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:59.633 Cannot find device "nvmf_init_br" 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:59.633 Cannot find device "nvmf_init_br2" 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:59.633 Cannot find device "nvmf_tgt_br" 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:59.633 Cannot find device "nvmf_tgt_br2" 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:59.633 Cannot find device "nvmf_br" 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:59.633 Cannot find device "nvmf_init_if" 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:59.633 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:59.893 Cannot find device "nvmf_init_if2" 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:59.893 
02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.893 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:59.893 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:00.152 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:00.152 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:10:00.152 00:10:00.152 --- 10.0.0.3 ping statistics --- 00:10:00.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.152 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:00.152 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:00.152 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:10:00.152 00:10:00.152 --- 10.0.0.4 ping statistics --- 00:10:00.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.152 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:00.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:10:00.152 00:10:00.152 --- 10.0.0.1 ping statistics --- 00:10:00.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.152 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:00.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:00.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:10:00.152 00:10:00.152 --- 10.0.0.2 ping statistics --- 00:10:00.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.152 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=76940 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 76940 00:10:00.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 76940 ']' 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.152 02:14:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:00.152 [2024-11-08 02:14:01.893340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
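For readability, here is a condensed sketch of the topology that nvmf_veth_init built in the trace above; only the first initiator/target veth pair is shown (the second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, is wired identically, and the link-up steps are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # target side, 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                                  # bridge joins the *_br peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                               # initiator-to-target reachability check

The pings above confirm both directions (host 10.0.0.1/10.0.0.2 to namespace 10.0.0.3/10.0.0.4 and back) before the nvmf target is started inside nvmf_tgt_ns_spdk.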
00:10:00.152 [2024-11-08 02:14:01.893683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.410 [2024-11-08 02:14:02.036937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.410 [2024-11-08 02:14:02.081090] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.410 [2024-11-08 02:14:02.081386] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.410 [2024-11-08 02:14:02.081558] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.410 [2024-11-08 02:14:02.081698] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.410 [2024-11-08 02:14:02.081717] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.410 [2024-11-08 02:14:02.081879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.410 [2024-11-08 02:14:02.082039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.410 [2024-11-08 02:14:02.082151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.410 [2024-11-08 02:14:02.082154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.976 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.976 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:00.976 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:00.976 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:00.976 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.234 [2024-11-08 02:14:02.940352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.234 [2024-11-08 02:14:02.954914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.234 Malloc0 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.234 02:14:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.234 [2024-11-08 02:14:03.019474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=76975 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=76977 00:10:01.234 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:01.235 02:14:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:01.235 { 00:10:01.235 "params": { 00:10:01.235 "name": "Nvme$subsystem", 00:10:01.235 "trtype": "$TEST_TRANSPORT", 00:10:01.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.235 "adrfam": "ipv4", 00:10:01.235 "trsvcid": "$NVMF_PORT", 00:10:01.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.235 "hdgst": ${hdgst:-false}, 00:10:01.235 "ddgst": ${ddgst:-false} 00:10:01.235 }, 00:10:01.235 "method": "bdev_nvme_attach_controller" 00:10:01.235 } 00:10:01.235 EOF 00:10:01.235 )") 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=76979 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:01.235 { 00:10:01.235 "params": { 00:10:01.235 "name": "Nvme$subsystem", 00:10:01.235 "trtype": "$TEST_TRANSPORT", 00:10:01.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.235 "adrfam": "ipv4", 00:10:01.235 "trsvcid": "$NVMF_PORT", 00:10:01.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.235 "hdgst": ${hdgst:-false}, 00:10:01.235 "ddgst": ${ddgst:-false} 00:10:01.235 }, 00:10:01.235 "method": "bdev_nvme_attach_controller" 00:10:01.235 } 00:10:01.235 EOF 00:10:01.235 )") 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=76982 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 
00:10:01.235 { 00:10:01.235 "params": { 00:10:01.235 "name": "Nvme$subsystem", 00:10:01.235 "trtype": "$TEST_TRANSPORT", 00:10:01.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.235 "adrfam": "ipv4", 00:10:01.235 "trsvcid": "$NVMF_PORT", 00:10:01.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.235 "hdgst": ${hdgst:-false}, 00:10:01.235 "ddgst": ${ddgst:-false} 00:10:01.235 }, 00:10:01.235 "method": "bdev_nvme_attach_controller" 00:10:01.235 } 00:10:01.235 EOF 00:10:01.235 )") 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:01.235 { 00:10:01.235 "params": { 00:10:01.235 "name": "Nvme$subsystem", 00:10:01.235 "trtype": "$TEST_TRANSPORT", 00:10:01.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:01.235 "adrfam": "ipv4", 00:10:01.235 "trsvcid": "$NVMF_PORT", 00:10:01.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:01.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:01.235 "hdgst": ${hdgst:-false}, 00:10:01.235 "ddgst": ${ddgst:-false} 00:10:01.235 }, 00:10:01.235 "method": "bdev_nvme_attach_controller" 00:10:01.235 } 00:10:01.235 EOF 00:10:01.235 )") 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:01.235 "params": { 00:10:01.235 "name": "Nvme1", 00:10:01.235 "trtype": "tcp", 00:10:01.235 "traddr": "10.0.0.3", 00:10:01.235 "adrfam": "ipv4", 00:10:01.235 "trsvcid": "4420", 00:10:01.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.235 "hdgst": false, 00:10:01.235 "ddgst": false 00:10:01.235 }, 00:10:01.235 "method": "bdev_nvme_attach_controller" 00:10:01.235 }' 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:01.235 "params": { 00:10:01.235 "name": "Nvme1", 00:10:01.235 "trtype": "tcp", 00:10:01.235 "traddr": "10.0.0.3", 00:10:01.235 "adrfam": "ipv4", 00:10:01.235 "trsvcid": "4420", 00:10:01.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.235 "hdgst": false, 00:10:01.235 "ddgst": false 00:10:01.235 }, 00:10:01.235 "method": "bdev_nvme_attach_controller" 00:10:01.235 }' 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
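The target-side setup traced a little earlier (bdev_io_wait.sh@18-25) reduces to a short RPC sequence against the target that was just started inside the namespace. Written here as direct scripts/rpc.py calls purely for illustration (the harness issues the same calls through its rpc_cmd wrapper and the default /var/tmp/spdk.sock):

    scripts/rpc.py bdev_set_options -p 5 -c 1          # tiny bdev_io pool, so I/O has to queue and wait - the behavior this test exercises
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420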
00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:01.235 "params": { 00:10:01.235 "name": "Nvme1", 00:10:01.235 "trtype": "tcp", 00:10:01.235 "traddr": "10.0.0.3", 00:10:01.235 "adrfam": "ipv4", 00:10:01.235 "trsvcid": "4420", 00:10:01.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.235 "hdgst": false, 00:10:01.235 "ddgst": false 00:10:01.235 }, 00:10:01.235 "method": "bdev_nvme_attach_controller" 00:10:01.235 }' 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:01.235 "params": { 00:10:01.235 "name": "Nvme1", 00:10:01.235 "trtype": "tcp", 00:10:01.235 "traddr": "10.0.0.3", 00:10:01.235 "adrfam": "ipv4", 00:10:01.235 "trsvcid": "4420", 00:10:01.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:01.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:01.235 "hdgst": false, 00:10:01.235 "ddgst": false 00:10:01.235 }, 00:10:01.235 "method": "bdev_nvme_attach_controller" 00:10:01.235 }' 00:10:01.235 02:14:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 76975 00:10:01.235 [2024-11-08 02:14:03.090229] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:01.235 [2024-11-08 02:14:03.090465] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:01.235 [2024-11-08 02:14:03.098955] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:01.235 [2024-11-08 02:14:03.099166] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:01.235 [2024-11-08 02:14:03.099860] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:01.235 [2024-11-08 02:14:03.100075] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:01.494 [2024-11-08 02:14:03.119851] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
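The heredoc JSON assembled and printed above is the host side of the same picture: each bdevperf instance is handed, via --json /dev/fd/63, a config telling it to attach to that listener as bdev Nvme1 (trtype tcp, traddr 10.0.0.3, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1). Four bdevperf instances are launched, one per workload, each on its own core; condensed from the command lines traced earlier (repo path shortened):

    build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256    # WRITE_PID 76975
    build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256    # READ_PID  76977
    build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256    # FLUSH_PID 76979
    build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256    # UNMAP_PID 76982

i.e. queue depth 128, 4 KiB I/Os, a 1-second run and 256 MB of memory per instance.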
00:10:01.494 [2024-11-08 02:14:03.119941] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:01.494 [2024-11-08 02:14:03.273633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.494 [2024-11-08 02:14:03.300725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:01.494 [2024-11-08 02:14:03.307523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.494 [2024-11-08 02:14:03.330355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:01.494 [2024-11-08 02:14:03.332571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.494 [2024-11-08 02:14:03.352762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.751 [2024-11-08 02:14:03.378449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.751 [2024-11-08 02:14:03.380181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:01.751 [2024-11-08 02:14:03.397565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.751 [2024-11-08 02:14:03.427255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:10:01.751 [2024-11-08 02:14:03.434004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.751 Running I/O for 1 seconds... 00:10:01.751 [2024-11-08 02:14:03.482894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.751 Running I/O for 1 seconds... 00:10:01.751 Running I/O for 1 seconds... 00:10:01.751 Running I/O for 1 seconds... 
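The per-workload tables that follow report IOPS, throughput and latency for each one-second run; throughput is simply IOPS times the 4 KiB I/O size, which can be sanity-checked by hand:

    echo '161783.06 * 4096 / 1048576' | bc -l    # ~= 631.97, matching the flush job's MiB/s column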
00:10:02.685 162136.00 IOPS, 633.34 MiB/s 00:10:02.685 Latency(us) 00:10:02.685 [2024-11-08T02:14:04.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.685 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:02.685 Nvme1n1 : 1.00 161783.06 631.97 0.00 0.00 786.93 400.29 2159.71 00:10:02.685 [2024-11-08T02:14:04.569Z] =================================================================================================================== 00:10:02.685 [2024-11-08T02:14:04.569Z] Total : 161783.06 631.97 0.00 0.00 786.93 400.29 2159.71 00:10:02.685 9707.00 IOPS, 37.92 MiB/s 00:10:02.685 Latency(us) 00:10:02.685 [2024-11-08T02:14:04.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.685 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:02.685 Nvme1n1 : 1.01 9742.76 38.06 0.00 0.00 13072.41 8281.37 20137.43 00:10:02.685 [2024-11-08T02:14:04.569Z] =================================================================================================================== 00:10:02.685 [2024-11-08T02:14:04.569Z] Total : 9742.76 38.06 0.00 0.00 13072.41 8281.37 20137.43 00:10:02.685 8129.00 IOPS, 31.75 MiB/s 00:10:02.685 Latency(us) 00:10:02.685 [2024-11-08T02:14:04.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.685 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:02.685 Nvme1n1 : 1.01 8191.73 32.00 0.00 0.00 15547.30 6285.50 25380.31 00:10:02.685 [2024-11-08T02:14:04.569Z] =================================================================================================================== 00:10:02.685 [2024-11-08T02:14:04.569Z] Total : 8191.73 32.00 0.00 0.00 15547.30 6285.50 25380.31 00:10:02.943 8443.00 IOPS, 32.98 MiB/s 00:10:02.943 Latency(us) 00:10:02.943 [2024-11-08T02:14:04.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.943 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:02.943 Nvme1n1 : 1.01 8523.84 33.30 0.00 0.00 14956.53 6225.92 25261.15 00:10:02.943 [2024-11-08T02:14:04.827Z] =================================================================================================================== 00:10:02.943 [2024-11-08T02:14:04.827Z] Total : 8523.84 33.30 0.00 0.00 14956.53 6225.92 25261.15 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 76977 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 76979 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 76982 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.943 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.943 rmmod nvme_tcp 00:10:02.943 rmmod nvme_fabrics 00:10:02.943 rmmod nvme_keyring 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 76940 ']' 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 76940 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 76940 ']' 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 76940 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76940 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.201 killing process with pid 76940 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76940' 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 76940 00:10:03.201 02:14:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 76940 00:10:03.201 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:03.201 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:03.201 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:03.202 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:03.202 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:10:03.202 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:03.202 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:10:03.202 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.202 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:03.202 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:03.202 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:03.202 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:03.202 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:03.461 00:10:03.461 real 0m4.114s 00:10:03.461 user 0m16.020s 00:10:03.461 sys 0m2.185s 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.461 ************************************ 00:10:03.461 END TEST nvmf_bdev_io_wait 00:10:03.461 ************************************ 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.461 ************************************ 00:10:03.461 START TEST nvmf_queue_depth 00:10:03.461 ************************************ 00:10:03.461 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:03.720 * Looking for test storage... 
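The queue_depth test begins the same way as the previous one: it locates the shared test storage and then re-runs the coverage-tooling version gate whose trace fills the next lines. That gate boils down to a per-component numeric comparison in scripts/common.sh, roughly:

    # lt A B  <=>  "version A sorts before version B"
    lt() { cmp_versions "$1" '<' "$2"; }             # cmp_versions splits both versions on '.', '-' and ':' and compares the pieces as integers
    lt "$(lcov --version | awk '{print $NF}')" 2     # true here (lcov 1.15), so the old --rc lcov_*_coverage=1 option spellings are kept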
00:10:03.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:03.720 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:03.720 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:03.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.721 --rc genhtml_branch_coverage=1 00:10:03.721 --rc genhtml_function_coverage=1 00:10:03.721 --rc genhtml_legend=1 00:10:03.721 --rc geninfo_all_blocks=1 00:10:03.721 --rc geninfo_unexecuted_blocks=1 00:10:03.721 00:10:03.721 ' 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:03.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.721 --rc genhtml_branch_coverage=1 00:10:03.721 --rc genhtml_function_coverage=1 00:10:03.721 --rc genhtml_legend=1 00:10:03.721 --rc geninfo_all_blocks=1 00:10:03.721 --rc geninfo_unexecuted_blocks=1 00:10:03.721 00:10:03.721 ' 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:03.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.721 --rc genhtml_branch_coverage=1 00:10:03.721 --rc genhtml_function_coverage=1 00:10:03.721 --rc genhtml_legend=1 00:10:03.721 --rc geninfo_all_blocks=1 00:10:03.721 --rc geninfo_unexecuted_blocks=1 00:10:03.721 00:10:03.721 ' 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:03.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.721 --rc genhtml_branch_coverage=1 00:10:03.721 --rc genhtml_function_coverage=1 00:10:03.721 --rc genhtml_legend=1 00:10:03.721 --rc geninfo_all_blocks=1 00:10:03.721 --rc geninfo_unexecuted_blocks=1 00:10:03.721 00:10:03.721 ' 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:03.721 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:03.721 
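From here queue_depth.sh repeats the same bring-up as the previous test (nvmftestinit / nvmf_veth_init, traced below), with one addition visible at queue_depth.sh@17 just below: bdevperf gets its own RPC socket, /var/tmp/bdevperf.sock, so the test can talk to it while it runs. As a purely hypothetical illustration of that pattern with the stock rpc.py client (the concrete bdevperf invocation appears later in this log):

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs    # query the running bdevperf instance over its private socket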
02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:03.721 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:03.722 02:14:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:03.722 Cannot find device "nvmf_init_br" 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:03.722 Cannot find device "nvmf_init_br2" 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:03.722 Cannot find device "nvmf_tgt_br" 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:03.722 Cannot find device "nvmf_tgt_br2" 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:03.722 Cannot find device "nvmf_init_br" 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:03.722 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:03.981 Cannot find device "nvmf_init_br2" 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:03.981 Cannot find device "nvmf_tgt_br" 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:03.981 Cannot find device "nvmf_tgt_br2" 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:03.981 Cannot find device "nvmf_br" 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:03.981 Cannot find device "nvmf_init_if" 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:03.981 Cannot find device "nvmf_init_if2" 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:03.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.981 02:14:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:03.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:03.981 
02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:03.981 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:04.240 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:04.240 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:04.240 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:04.240 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:04.240 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:04.240 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:04.240 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:04.240 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:04.240 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:04.240 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:10:04.240 00:10:04.240 --- 10.0.0.3 ping statistics --- 00:10:04.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.240 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:04.240 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:04.240 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:04.240 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:10:04.240 00:10:04.240 --- 10.0.0.4 ping statistics --- 00:10:04.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.240 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:04.240 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:04.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:04.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:04.241 00:10:04.241 --- 10.0.0.1 ping statistics --- 00:10:04.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.241 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:04.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:04.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:10:04.241 00:10:04.241 --- 10.0.0.2 ping statistics --- 00:10:04.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.241 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=77245 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 77245 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 77245 ']' 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.241 02:14:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.241 [2024-11-08 02:14:05.996653] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
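The nvmf_veth_init sequence traced above builds the whole test network before nvmf_tgt comes up: two initiator veth pairs stay on the host, two target veth pairs have their device ends moved into the nvmf_tgt_ns_spdk namespace, and all four bridge-side ends are enslaved to nvmf_br. A condensed stand-alone sketch of that topology, assuming only the iproute2/iptables tools and reusing the interface names and 10.0.0.0/24 addresses from the trace (this is an illustrative reconstruction, not the test script itself), would look roughly like this:

    #!/usr/bin/env bash
    # Sketch of the topology nvmf_veth_init builds (illustrative, not the harness code).
    set -e
    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: device end <-> bridge end
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # target device ends live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: initiators .1/.2 on the host, targets .3/.4 in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and tie the bridge-side ends together on nvmf_br
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
    done

    # open the NVMe/TCP port, allow bridged traffic, then verify reachability
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4

The earlier "Cannot find device" and "Cannot open network namespace" messages are the best-effort cleanup pass (common.sh@162-174) failing harmlessly because no previous topology existed; each failing command is followed by "true".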
00:10:04.241 [2024-11-08 02:14:05.996749] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.500 [2024-11-08 02:14:06.140533] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.500 [2024-11-08 02:14:06.172099] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.500 [2024-11-08 02:14:06.172435] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.500 [2024-11-08 02:14:06.172471] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.500 [2024-11-08 02:14:06.172480] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.500 [2024-11-08 02:14:06.172488] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.500 [2024-11-08 02:14:06.172517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.500 [2024-11-08 02:14:06.199829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.500 [2024-11-08 02:14:06.295195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.500 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.500 Malloc0 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.501 [2024-11-08 02:14:06.359155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=77264 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 77264 /var/tmp/bdevperf.sock 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 77264 ']' 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:04.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.501 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:04.759 [2024-11-08 02:14:06.425396] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
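With the target running, queue_depth.sh provisions it entirely over the SPDK RPC socket and then points bdevperf at the exported namespace; the controller attach and the perform_tests call follow just below. Expressed as explicit rpc.py invocations instead of the rpc_cmd wrapper, and using only the values visible in the trace, the sequence amounts to roughly:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side (nvmf_tgt already running inside nvmf_tgt_ns_spdk with -m 0x2)
    $RPC nvmf_create_transport -t tcp -o -u 8192                      # same transport flags as the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # initiator side: bdevperf with queue depth 1024, 4 KiB verify I/O for 10 seconds
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

The -q 1024 value is the queue depth the test is named for.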
00:10:04.759 [2024-11-08 02:14:06.425690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77264 ] 00:10:04.759 [2024-11-08 02:14:06.562629] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.759 [2024-11-08 02:14:06.595514] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.759 [2024-11-08 02:14:06.625216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.018 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.018 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:05.018 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:05.018 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.018 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:05.018 NVMe0n1 00:10:05.018 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.018 02:14:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:05.018 Running I/O for 10 seconds... 00:10:07.346 7118.00 IOPS, 27.80 MiB/s [2024-11-08T02:14:10.227Z] 7432.00 IOPS, 29.03 MiB/s [2024-11-08T02:14:11.163Z] 7642.00 IOPS, 29.85 MiB/s [2024-11-08T02:14:12.100Z] 7872.25 IOPS, 30.75 MiB/s [2024-11-08T02:14:13.035Z] 8208.00 IOPS, 32.06 MiB/s [2024-11-08T02:14:13.969Z] 8423.33 IOPS, 32.90 MiB/s [2024-11-08T02:14:14.904Z] 8608.71 IOPS, 33.63 MiB/s [2024-11-08T02:14:16.279Z] 8675.25 IOPS, 33.89 MiB/s [2024-11-08T02:14:17.215Z] 8776.33 IOPS, 34.28 MiB/s [2024-11-08T02:14:17.215Z] 8867.90 IOPS, 34.64 MiB/s 00:10:15.331 Latency(us) 00:10:15.331 [2024-11-08T02:14:17.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.331 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:15.331 Verification LBA range: start 0x0 length 0x4000 00:10:15.331 NVMe0n1 : 10.06 8902.17 34.77 0.00 0.00 114511.47 14537.08 92941.96 00:10:15.331 [2024-11-08T02:14:17.215Z] =================================================================================================================== 00:10:15.331 [2024-11-08T02:14:17.215Z] Total : 8902.17 34.77 0.00 0.00 114511.47 14537.08 92941.96 00:10:15.331 { 00:10:15.331 "results": [ 00:10:15.331 { 00:10:15.331 "job": "NVMe0n1", 00:10:15.331 "core_mask": "0x1", 00:10:15.331 "workload": "verify", 00:10:15.331 "status": "finished", 00:10:15.331 "verify_range": { 00:10:15.331 "start": 0, 00:10:15.331 "length": 16384 00:10:15.331 }, 00:10:15.331 "queue_depth": 1024, 00:10:15.331 "io_size": 4096, 00:10:15.331 "runtime": 10.059462, 00:10:15.331 "iops": 8902.165940882325, 00:10:15.331 "mibps": 34.77408570657158, 00:10:15.331 "io_failed": 0, 00:10:15.331 "io_timeout": 0, 00:10:15.331 "avg_latency_us": 114511.47358159545, 00:10:15.331 "min_latency_us": 14537.076363636364, 00:10:15.331 "max_latency_us": 92941.96363636364 00:10:15.331 
} 00:10:15.331 ], 00:10:15.331 "core_count": 1 00:10:15.331 } 00:10:15.331 02:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 77264 00:10:15.331 02:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 77264 ']' 00:10:15.331 02:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 77264 00:10:15.331 02:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:15.331 02:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.331 02:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77264 00:10:15.331 02:14:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77264' 00:10:15.331 killing process with pid 77264 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 77264 00:10:15.331 Received shutdown signal, test time was about 10.000000 seconds 00:10:15.331 00:10:15.331 Latency(us) 00:10:15.331 [2024-11-08T02:14:17.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.331 [2024-11-08T02:14:17.215Z] =================================================================================================================== 00:10:15.331 [2024-11-08T02:14:17.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 77264 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:15.331 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:15.331 rmmod nvme_tcp 00:10:15.331 rmmod nvme_fabrics 00:10:15.331 rmmod nvme_keyring 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 77245 ']' 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 77245 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 77245 ']' 00:10:15.590 
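Before the teardown above, bdevperf reported 8902.17 IOPS at 34.77 MiB/s; the MiB/s column is just the measured IOPS multiplied by the 4096-byte I/O size, as a quick check against the JSON block shows:

    # 8902.165940882325 IOPS * 4096 B per I/O, expressed in MiB/s
    python3 -c 'print(8902.165940882325 * 4096 / 2**20)'   # -> 34.7740857..., matching the "mibps" field above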
02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 77245 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77245 00:10:15.590 killing process with pid 77245 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77245' 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 77245 00:10:15.590 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 77245 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:15.591 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:15.850 02:14:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:10:15.850 00:10:15.850 real 0m12.317s 00:10:15.850 user 0m21.096s 00:10:15.850 sys 0m2.071s 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.850 ************************************ 00:10:15.850 END TEST nvmf_queue_depth 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.850 ************************************ 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.850 ************************************ 00:10:15.850 START TEST nvmf_target_multipath 00:10:15.850 ************************************ 00:10:15.850 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:16.110 * Looking for test storage... 
00:10:16.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:16.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.110 --rc genhtml_branch_coverage=1 00:10:16.110 --rc genhtml_function_coverage=1 00:10:16.110 --rc genhtml_legend=1 00:10:16.110 --rc geninfo_all_blocks=1 00:10:16.110 --rc geninfo_unexecuted_blocks=1 00:10:16.110 00:10:16.110 ' 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:16.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.110 --rc genhtml_branch_coverage=1 00:10:16.110 --rc genhtml_function_coverage=1 00:10:16.110 --rc genhtml_legend=1 00:10:16.110 --rc geninfo_all_blocks=1 00:10:16.110 --rc geninfo_unexecuted_blocks=1 00:10:16.110 00:10:16.110 ' 00:10:16.110 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:16.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.110 --rc genhtml_branch_coverage=1 00:10:16.110 --rc genhtml_function_coverage=1 00:10:16.110 --rc genhtml_legend=1 00:10:16.110 --rc geninfo_all_blocks=1 00:10:16.111 --rc geninfo_unexecuted_blocks=1 00:10:16.111 00:10:16.111 ' 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:16.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.111 --rc genhtml_branch_coverage=1 00:10:16.111 --rc genhtml_function_coverage=1 00:10:16.111 --rc genhtml_legend=1 00:10:16.111 --rc geninfo_all_blocks=1 00:10:16.111 --rc geninfo_unexecuted_blocks=1 00:10:16.111 00:10:16.111 ' 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.111 
02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.111 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:16.111 02:14:17 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:16.111 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:16.112 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:16.112 Cannot find device "nvmf_init_br" 00:10:16.112 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:16.112 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:16.112 Cannot find device "nvmf_init_br2" 00:10:16.112 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:16.112 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:16.112 Cannot find device "nvmf_tgt_br" 00:10:16.112 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:10:16.112 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:16.112 Cannot find device "nvmf_tgt_br2" 00:10:16.112 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:10:16.112 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:16.371 Cannot find device "nvmf_init_br" 00:10:16.371 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:16.371 02:14:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:16.371 Cannot find device "nvmf_init_br2" 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:16.371 Cannot find device "nvmf_tgt_br" 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:16.371 Cannot find device "nvmf_tgt_br2" 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:16.371 Cannot find device "nvmf_br" 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:16.371 Cannot find device "nvmf_init_if" 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:16.371 Cannot find device "nvmf_init_if2" 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:16.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:16.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:16.371 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:16.630 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:16.630 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:16.631 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.138 ms 00:10:16.631 00:10:16.631 --- 10.0.0.3 ping statistics --- 00:10:16.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.631 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:16.631 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:16.631 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:10:16.631 00:10:16.631 --- 10.0.0.4 ping statistics --- 00:10:16.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.631 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:16.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:10:16.631 00:10:16.631 --- 10.0.0.1 ping statistics --- 00:10:16.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.631 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:16.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:10:16.631 00:10:16.631 --- 10.0.0.2 ping statistics --- 00:10:16.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.631 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=77633 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 77633 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 77633 ']' 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
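nvmfappstart, invoked just above, reduces to starting the SPDK target binary inside the target namespace and waiting for its RPC socket before the test continues. A simplified stand-alone sketch of what the trace shows; the real waitforlisten helper in autotest_common.sh additionally bounds the retries and re-checks the pid:

    # -m 0xF pins the target to 4 cores, -e 0xFFFF enables all tracepoint groups,
    # and running under `ip netns exec` lets it listen on 10.0.0.3/10.0.0.4.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Wait until the app answers on its default RPC socket /var/tmp/spdk.sock.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done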
00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.631 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:16.631 [2024-11-08 02:14:18.467926] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:16.631 [2024-11-08 02:14:18.468038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.890 [2024-11-08 02:14:18.609016] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.890 [2024-11-08 02:14:18.651007] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.890 [2024-11-08 02:14:18.651075] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.890 [2024-11-08 02:14:18.651088] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.890 [2024-11-08 02:14:18.651098] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.890 [2024-11-08 02:14:18.651139] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.890 [2024-11-08 02:14:18.651243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.890 [2024-11-08 02:14:18.651314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.890 [2024-11-08 02:14:18.652040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.890 [2024-11-08 02:14:18.652095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.890 [2024-11-08 02:14:18.684699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:16.890 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.890 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:10:16.890 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:16.890 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.890 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:17.148 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.149 02:14:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:17.407 [2024-11-08 02:14:19.064985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.407 02:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:17.665 Malloc0 00:10:17.665 02:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:17.931 02:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.192 02:14:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:18.450 [2024-11-08 02:14:20.171673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:18.450 02:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:18.708 [2024-11-08 02:14:20.459989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:18.708 02:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:18.966 02:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:18.966 02:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.966 02:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:18.966 02:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.966 02:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:18.966 02:14:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:20.867 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:20.867 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:20.867 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=77715 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:21.125 02:14:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:21.125 [global] 00:10:21.125 thread=1 00:10:21.125 invalidate=1 00:10:21.125 rw=randrw 00:10:21.125 time_based=1 00:10:21.125 runtime=6 00:10:21.125 ioengine=libaio 00:10:21.125 direct=1 00:10:21.125 bs=4096 00:10:21.125 iodepth=128 00:10:21.125 norandommap=0 00:10:21.125 numjobs=1 00:10:21.125 00:10:21.125 verify_dump=1 00:10:21.125 verify_backlog=512 00:10:21.125 verify_state_save=0 00:10:21.125 do_verify=1 00:10:21.125 verify=crc32c-intel 00:10:21.125 [job0] 00:10:21.125 filename=/dev/nvme0n1 00:10:21.125 Could not set queue depth (nvme0n1) 00:10:21.125 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:21.125 fio-3.35 00:10:21.125 Starting 1 thread 00:10:22.074 02:14:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:22.346 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:22.604 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:22.604 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:22.604 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:22.604 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:22.604 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:22.604 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:22.604 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:22.605 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:22.605 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:22.605 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:22.605 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:22.605 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:22.605 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:22.863 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:23.122 02:14:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 77715 00:10:27.310 00:10:27.310 job0: (groupid=0, jobs=1): err= 0: pid=77736: Fri Nov 8 02:14:29 2024 00:10:27.310 read: IOPS=10.4k, BW=40.8MiB/s (42.8MB/s)(245MiB/6006msec) 00:10:27.310 slat (usec): min=7, max=6235, avg=56.92, stdev=220.00 00:10:27.310 clat (usec): min=1688, max=14888, avg=8337.40, stdev=1440.41 00:10:27.310 lat (usec): min=1698, max=14898, avg=8394.31, stdev=1444.37 00:10:27.310 clat percentiles (usec): 00:10:27.310 | 1.00th=[ 4359], 5.00th=[ 6390], 10.00th=[ 7111], 20.00th=[ 7570], 00:10:27.310 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8455], 00:10:27.310 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[11731], 00:10:27.310 | 99.00th=[12911], 99.50th=[13304], 99.90th=[13829], 99.95th=[14091], 00:10:27.310 | 99.99th=[14615] 00:10:27.310 bw ( KiB/s): min=11680, max=29824, per=51.74%, avg=21608.18, stdev=5557.21, samples=11 00:10:27.310 iops : min= 2920, max= 7456, avg=5402.00, stdev=1389.29, samples=11 00:10:27.310 write: IOPS=6141, BW=24.0MiB/s (25.2MB/s)(129MiB/5377msec); 0 zone resets 00:10:27.310 slat (usec): min=15, max=2686, avg=63.29, stdev=156.87 00:10:27.310 clat (usec): min=1798, max=14849, avg=7240.69, stdev=1299.80 00:10:27.310 lat (usec): min=1821, max=14873, avg=7303.99, stdev=1304.69 00:10:27.310 clat percentiles (usec): 00:10:27.310 | 1.00th=[ 3294], 5.00th=[ 4293], 10.00th=[ 5735], 20.00th=[ 6652], 00:10:27.310 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7570], 00:10:27.310 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8717], 00:10:27.310 | 99.00th=[11076], 99.50th=[11731], 99.90th=[12780], 99.95th=[13435], 00:10:27.310 | 99.99th=[14091] 00:10:27.310 bw ( KiB/s): min=12056, max=29304, per=88.17%, avg=21661.55, stdev=5368.69, samples=11 00:10:27.310 iops : min= 3014, max= 7326, avg=5415.27, stdev=1342.28, samples=11 00:10:27.310 lat (msec) : 2=0.02%, 4=1.62%, 10=92.51%, 20=5.85% 00:10:27.310 cpu : usr=5.63%, sys=21.18%, ctx=5672, majf=0, minf=90 00:10:27.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:27.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:27.310 issued rwts: total=62705,33025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:27.310 00:10:27.310 Run status group 0 (all jobs): 00:10:27.310 READ: bw=40.8MiB/s (42.8MB/s), 40.8MiB/s-40.8MiB/s (42.8MB/s-42.8MB/s), io=245MiB (257MB), run=6006-6006msec 00:10:27.310 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=129MiB (135MB), run=5377-5377msec 00:10:27.310 00:10:27.310 Disk stats (read/write): 00:10:27.310 nvme0n1: ios=61847/32441, merge=0/0, ticks=494341/220635, in_queue=714976, util=98.58% 00:10:27.310 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:27.569 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=77817 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:27.836 02:14:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:27.836 [global] 00:10:27.836 thread=1 00:10:27.836 invalidate=1 00:10:27.836 rw=randrw 00:10:27.836 time_based=1 00:10:27.836 runtime=6 00:10:27.836 ioengine=libaio 00:10:27.836 direct=1 00:10:27.836 bs=4096 00:10:27.836 iodepth=128 00:10:27.836 norandommap=0 00:10:27.836 numjobs=1 00:10:27.836 00:10:27.836 verify_dump=1 00:10:27.836 verify_backlog=512 00:10:27.836 verify_state_save=0 00:10:27.836 do_verify=1 00:10:27.836 verify=crc32c-intel 00:10:27.836 [job0] 00:10:27.836 filename=/dev/nvme0n1 00:10:27.836 Could not set queue depth (nvme0n1) 00:10:28.098 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.098 fio-3.35 00:10:28.098 Starting 1 thread 00:10:29.033 02:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:29.291 02:14:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:29.549 
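Condensed, the scenario this trace keeps repeating is: one subsystem exporting a Malloc namespace through two TCP listeners, one host controller connected per portal, and fio (the fio-wrapper call above: 4 KiB random read/write, iodepth 128, 6 s runtime with crc32c verification) kept running while the ANA state of each listener is flipped and the kernel's view is polled through sysfs. A trimmed sketch using the same RPCs, addresses and sysfs paths as the trace; the host NQN/ID come from the variables common.sh derives with `nvme gen-hostnqn`:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target side: one namespace, two portals on 10.0.0.3 and 10.0.0.4, port 4420.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.4 -s 4420

    # Host side: one controller per portal, same flags as the nvme connect calls above.
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n $nqn -a 10.0.0.3 -s 4420 -g -G
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n $nqn -a 10.0.0.4 -s 4420 -g -G

    # Failover: retire the first path, promote the second, then poll the per-path
    # ANA state until the kernel agrees (a sketch of multipath.sh's check_ana_state).
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
    wait_ana() {   # wait_ana <path> <expected-state>, e.g. wait_ana nvme0c0n1 inaccessible
        local f=/sys/block/$1/ana_state t=20
        while [[ "$(cat "$f" 2>/dev/null)" != "$2" ]] && (( t-- > 0 )); do sleep 1; done
    }
    wait_ana nvme0c0n1 inaccessible
    wait_ana nvme0c1n1 non-optimized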
02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:29.549 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:29.808 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:30.067 02:14:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 77817 00:10:34.254 00:10:34.254 job0: (groupid=0, jobs=1): err= 0: pid=77838: Fri Nov 8 02:14:35 2024 00:10:34.254 read: IOPS=11.4k, BW=44.4MiB/s (46.6MB/s)(267MiB/6007msec) 00:10:34.254 slat (usec): min=4, max=6249, avg=42.84, stdev=187.60 00:10:34.254 clat (usec): min=1105, max=15495, avg=7630.12, stdev=1968.42 00:10:34.254 lat (usec): min=1118, max=15513, avg=7672.96, stdev=1983.25 00:10:34.254 clat percentiles (usec): 00:10:34.254 | 1.00th=[ 2835], 5.00th=[ 3884], 10.00th=[ 4686], 20.00th=[ 6194], 00:10:34.254 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8160], 00:10:34.254 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11076], 00:10:34.254 | 99.00th=[12911], 99.50th=[13304], 99.90th=[13960], 99.95th=[14353], 00:10:34.254 | 99.99th=[14877] 00:10:34.254 bw ( KiB/s): min=13688, max=39049, per=54.11%, avg=24611.00, stdev=8088.44, samples=11 00:10:34.254 iops : min= 3422, max= 9762, avg=6152.73, stdev=2022.07, samples=11 00:10:34.254 write: IOPS=6914, BW=27.0MiB/s (28.3MB/s)(145MiB/5357msec); 0 zone resets 00:10:34.254 slat (usec): min=15, max=3282, avg=54.61, stdev=133.28 00:10:34.254 clat (usec): min=1496, max=14135, avg=6518.01, stdev=1782.18 00:10:34.254 lat (usec): min=1522, max=14159, avg=6572.63, stdev=1796.41 00:10:34.254 clat percentiles (usec): 00:10:34.254 | 1.00th=[ 2540], 5.00th=[ 3326], 10.00th=[ 3818], 20.00th=[ 4621], 00:10:34.254 | 30.00th=[ 5538], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7373], 00:10:34.254 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8586], 00:10:34.254 | 99.00th=[10683], 99.50th=[11600], 99.90th=[12780], 99.95th=[13304], 00:10:34.254 | 99.99th=[13829] 00:10:34.254 bw ( KiB/s): min=14392, max=38315, per=89.12%, avg=24648.27, stdev=7818.32, samples=11 00:10:34.254 iops : min= 3598, max= 9578, avg=6162.00, stdev=1954.45, samples=11 00:10:34.254 lat (msec) : 2=0.27%, 4=7.67%, 10=87.38%, 20=4.68% 00:10:34.254 cpu : usr=6.83%, sys=23.41%, ctx=6035, majf=0, minf=94 00:10:34.254 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:34.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.254 issued rwts: total=68308,37039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.254 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.254 
00:10:34.254 Run status group 0 (all jobs): 00:10:34.254 READ: bw=44.4MiB/s (46.6MB/s), 44.4MiB/s-44.4MiB/s (46.6MB/s-46.6MB/s), io=267MiB (280MB), run=6007-6007msec 00:10:34.254 WRITE: bw=27.0MiB/s (28.3MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=145MiB (152MB), run=5357-5357msec 00:10:34.254 00:10:34.254 Disk stats (read/write): 00:10:34.254 nvme0n1: ios=67664/36177, merge=0/0, ticks=492312/218620, in_queue=710932, util=98.68% 00:10:34.254 02:14:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:34.254 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.254 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:34.254 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:34.254 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.254 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:34.254 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.254 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:34.254 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.512 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:34.512 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:34.512 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:34.512 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:34.512 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:34.512 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.771 rmmod nvme_tcp 00:10:34.771 rmmod nvme_fabrics 00:10:34.771 rmmod nvme_keyring 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 77633 ']' 00:10:34.771 02:14:36 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 77633 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 77633 ']' 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 77633 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77633 00:10:34.771 killing process with pid 77633 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77633' 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 77633 00:10:34.771 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 77633 00:10:35.030 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:35.030 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:35.030 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:35.030 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:35.030 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:35.030 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:35.030 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:35.031 02:14:36 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:35.031 00:10:35.031 real 0m19.174s 00:10:35.031 user 1m10.734s 00:10:35.031 sys 0m9.999s 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.031 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:35.031 ************************************ 00:10:35.031 END TEST nvmf_target_multipath 00:10:35.031 ************************************ 00:10:35.291 02:14:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:35.291 02:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.291 02:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.291 02:14:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.291 ************************************ 00:10:35.291 START TEST nvmf_zcopy 00:10:35.291 ************************************ 00:10:35.291 02:14:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:35.291 * Looking for test storage... 
00:10:35.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:35.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.291 --rc genhtml_branch_coverage=1 00:10:35.291 --rc genhtml_function_coverage=1 00:10:35.291 --rc genhtml_legend=1 00:10:35.291 --rc geninfo_all_blocks=1 00:10:35.291 --rc geninfo_unexecuted_blocks=1 00:10:35.291 00:10:35.291 ' 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:35.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.291 --rc genhtml_branch_coverage=1 00:10:35.291 --rc genhtml_function_coverage=1 00:10:35.291 --rc genhtml_legend=1 00:10:35.291 --rc geninfo_all_blocks=1 00:10:35.291 --rc geninfo_unexecuted_blocks=1 00:10:35.291 00:10:35.291 ' 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:35.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.291 --rc genhtml_branch_coverage=1 00:10:35.291 --rc genhtml_function_coverage=1 00:10:35.291 --rc genhtml_legend=1 00:10:35.291 --rc geninfo_all_blocks=1 00:10:35.291 --rc geninfo_unexecuted_blocks=1 00:10:35.291 00:10:35.291 ' 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:35.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.291 --rc genhtml_branch_coverage=1 00:10:35.291 --rc genhtml_function_coverage=1 00:10:35.291 --rc genhtml_legend=1 00:10:35.291 --rc geninfo_all_blocks=1 00:10:35.291 --rc geninfo_unexecuted_blocks=1 00:10:35.291 00:10:35.291 ' 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
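The scripts/common.sh trace just above (`lt 1.15 2` guarding the lcov option block) is a field-wise version comparison: both version strings are split on `.`, `-` and `:` and compared element by element, with missing fields treated as zero. A minimal reconstruction of that check (only the "<" case, helper name hypothetical):

    # Returns 0 when the first version sorts strictly before the second.
    version_lt() {
        local IFS='.-:' i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first difference decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                        # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov older than 2.x"     # 1.15 < 2 -> true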
00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:10:35.291 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.292 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
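From here on the zcopy test follows the same skeleton every nvmf target test in this log uses: source test/nvmf/common.sh, call nvmftestinit (which for NET_TYPE=virt rebuilds the veth/namespace topology shown at the top of this section), start the target, and rely on an EXIT trap so nvmftestfini tears everything down even if the test fails. Roughly, with paths as in the trace and the structure simplified:

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk
    source "$rootdir/test/nvmf/common.sh"

    nvmftestinit            # namespaces, veth pairs, bridge, iptables rules, modprobe nvme-tcp
    nvmfappstart -m 0xF     # launches nvmf_tgt in the namespace (core mask varies per test) and
                            # registers the 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' trap

    # ... test body: nvmf_create_transport, subsystems, listeners, I/O ...

    trap - SIGINT SIGTERM EXIT
    nvmftestfini            # disconnect initiators, unload nvme-tcp/nvme-fabrics, kill nvmf_tgt,
                            # drop the SPDK iptables rules and delete the veth/bridge topology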
00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:35.292 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:35.552 Cannot find device "nvmf_init_br" 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:35.552 02:14:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:35.552 Cannot find device "nvmf_init_br2" 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:35.552 Cannot find device "nvmf_tgt_br" 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.552 Cannot find device "nvmf_tgt_br2" 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:35.552 Cannot find device "nvmf_init_br" 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:35.552 Cannot find device "nvmf_init_br2" 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:35.552 Cannot find device "nvmf_tgt_br" 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:35.552 Cannot find device "nvmf_tgt_br2" 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:35.552 Cannot find device "nvmf_br" 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:35.552 Cannot find device "nvmf_init_if" 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:35.552 Cannot find device "nvmf_init_if2" 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.552 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:35.552 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:35.812 02:14:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:35.812 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:35.812 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:10:35.812 00:10:35.812 --- 10.0.0.3 ping statistics --- 00:10:35.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.812 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:35.812 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:35.812 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 00:10:35.812 00:10:35.812 --- 10.0.0.4 ping statistics --- 00:10:35.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.812 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:35.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:35.812 00:10:35.812 --- 10.0.0.1 ping statistics --- 00:10:35.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.812 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:35.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:35.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:10:35.812 00:10:35.812 --- 10.0.0.2 ping statistics --- 00:10:35.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.812 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.812 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=78141 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 78141 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 78141 ']' 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.813 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:35.813 [2024-11-08 02:14:37.633364] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:35.813 [2024-11-08 02:14:37.633461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.071 [2024-11-08 02:14:37.771289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.071 [2024-11-08 02:14:37.812600] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.071 [2024-11-08 02:14:37.812660] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.071 [2024-11-08 02:14:37.812683] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.071 [2024-11-08 02:14:37.812693] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.071 [2024-11-08 02:14:37.812701] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.071 [2024-11-08 02:14:37.812733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.071 [2024-11-08 02:14:37.845231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.071 [2024-11-08 02:14:37.931498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.071 [2024-11-08 02:14:37.947639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.071 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.330 malloc0 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:36.330 { 00:10:36.330 "params": { 00:10:36.330 "name": "Nvme$subsystem", 00:10:36.330 "trtype": "$TEST_TRANSPORT", 00:10:36.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:36.330 "adrfam": "ipv4", 00:10:36.330 "trsvcid": "$NVMF_PORT", 00:10:36.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:36.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:36.330 "hdgst": ${hdgst:-false}, 00:10:36.330 "ddgst": ${ddgst:-false} 00:10:36.330 }, 00:10:36.330 "method": "bdev_nvme_attach_controller" 00:10:36.330 } 00:10:36.330 EOF 00:10:36.330 )") 00:10:36.330 02:14:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:36.330 02:14:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
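The bdevperf invocation above reads its bdev configuration from --json /dev/fd/62, consistent with the harness handing gen_nvmf_target_json's output to bdevperf through a process substitution; the generated bdev_nvme_attach_controller stanza is printed in full just below. A minimal stand-alone sketch of the same idea, assuming a target is already listening on 10.0.0.3:4420 with the subsystem and host NQNs shown in this log (the wrapper JSON is an illustrative minimal config, not necessarily byte-for-byte what gen_nvmf_target_json emits):

# illustrative minimal bdevperf JSON config; address and NQNs taken from the log above
cat > /tmp/bdevperf_nvmf.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1"
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 10 -q 128 -w verify -o 8192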
00:10:36.330 02:14:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:36.330 02:14:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:36.330 "params": { 00:10:36.330 "name": "Nvme1", 00:10:36.330 "trtype": "tcp", 00:10:36.330 "traddr": "10.0.0.3", 00:10:36.330 "adrfam": "ipv4", 00:10:36.330 "trsvcid": "4420", 00:10:36.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:36.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:36.331 "hdgst": false, 00:10:36.331 "ddgst": false 00:10:36.331 }, 00:10:36.331 "method": "bdev_nvme_attach_controller" 00:10:36.331 }' 00:10:36.331 [2024-11-08 02:14:38.048410] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:36.331 [2024-11-08 02:14:38.048511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78166 ] 00:10:36.331 [2024-11-08 02:14:38.185963] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.602 [2024-11-08 02:14:38.223192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.602 [2024-11-08 02:14:38.260232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:36.602 Running I/O for 10 seconds... 00:10:38.491 6286.00 IOPS, 49.11 MiB/s [2024-11-08T02:14:41.752Z] 6073.00 IOPS, 47.45 MiB/s [2024-11-08T02:14:42.687Z] 6127.00 IOPS, 47.87 MiB/s [2024-11-08T02:14:43.622Z] 6150.00 IOPS, 48.05 MiB/s [2024-11-08T02:14:44.558Z] 6162.60 IOPS, 48.15 MiB/s [2024-11-08T02:14:45.494Z] 6135.67 IOPS, 47.93 MiB/s [2024-11-08T02:14:46.430Z] 6122.14 IOPS, 47.83 MiB/s [2024-11-08T02:14:47.366Z] 6156.00 IOPS, 48.09 MiB/s [2024-11-08T02:14:48.740Z] 6209.78 IOPS, 48.51 MiB/s [2024-11-08T02:14:48.740Z] 6253.40 IOPS, 48.85 MiB/s 00:10:46.856 Latency(us) 00:10:46.856 [2024-11-08T02:14:48.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.856 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:46.856 Verification LBA range: start 0x0 length 0x1000 00:10:46.856 Nvme1n1 : 10.02 6256.66 48.88 0.00 0.00 20394.10 2800.17 31457.28 00:10:46.856 [2024-11-08T02:14:48.740Z] =================================================================================================================== 00:10:46.856 [2024-11-08T02:14:48.740Z] Total : 6256.66 48.88 0.00 0.00 20394.10 2800.17 31457.28 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=78283 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:46.856 02:14:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:46.856 { 00:10:46.856 "params": { 00:10:46.856 "name": "Nvme$subsystem", 00:10:46.856 "trtype": "$TEST_TRANSPORT", 00:10:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.856 "adrfam": "ipv4", 00:10:46.856 "trsvcid": "$NVMF_PORT", 00:10:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.856 "hdgst": ${hdgst:-false}, 00:10:46.856 "ddgst": ${ddgst:-false} 00:10:46.856 }, 00:10:46.856 "method": "bdev_nvme_attach_controller" 00:10:46.856 } 00:10:46.856 EOF 00:10:46.856 )") 00:10:46.856 [2024-11-08 02:14:48.519213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.519260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:46.856 02:14:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:46.856 "params": { 00:10:46.856 "name": "Nvme1", 00:10:46.856 "trtype": "tcp", 00:10:46.856 "traddr": "10.0.0.3", 00:10:46.856 "adrfam": "ipv4", 00:10:46.856 "trsvcid": "4420", 00:10:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.856 "hdgst": false, 00:10:46.856 "ddgst": false 00:10:46.856 }, 00:10:46.856 "method": "bdev_nvme_attach_controller" 00:10:46.856 }' 00:10:46.856 [2024-11-08 02:14:48.531167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.531199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.856 [2024-11-08 02:14:48.539181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.539210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.856 [2024-11-08 02:14:48.551170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.551198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.856 [2024-11-08 02:14:48.563166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.563192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.856 [2024-11-08 02:14:48.575182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.575211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.856 [2024-11-08 02:14:48.577660] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:46.856 [2024-11-08 02:14:48.577750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78283 ] 00:10:46.856 [2024-11-08 02:14:48.587178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.587207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.856 [2024-11-08 02:14:48.599202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.599242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.856 [2024-11-08 02:14:48.611229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.611264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.856 [2024-11-08 02:14:48.623198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.623232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.856 [2024-11-08 02:14:48.635195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.635224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.856 [2024-11-08 02:14:48.647198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.856 [2024-11-08 02:14:48.647229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.857 [2024-11-08 02:14:48.659199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.857 [2024-11-08 02:14:48.659228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.857 [2024-11-08 02:14:48.671206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.857 [2024-11-08 02:14:48.671236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.857 [2024-11-08 02:14:48.683201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.857 [2024-11-08 02:14:48.683230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.857 [2024-11-08 02:14:48.695205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.857 [2024-11-08 02:14:48.695234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.857 [2024-11-08 02:14:48.707204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.857 [2024-11-08 02:14:48.707231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.857 [2024-11-08 02:14:48.717602] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.857 [2024-11-08 02:14:48.719221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.857 [2024-11-08 02:14:48.719253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.857 [2024-11-08 02:14:48.727244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.857 [2024-11-08 02:14:48.727281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:46.857 [2024-11-08 02:14:48.735226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.857 [2024-11-08 02:14:48.735257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.115 [2024-11-08 02:14:48.747238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.115 [2024-11-08 02:14:48.747558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.115 [2024-11-08 02:14:48.752252] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.115 [2024-11-08 02:14:48.755229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.115 [2024-11-08 02:14:48.755259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.115 [2024-11-08 02:14:48.763222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.115 [2024-11-08 02:14:48.763250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.115 [2024-11-08 02:14:48.775257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.115 [2024-11-08 02:14:48.775582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.115 [2024-11-08 02:14:48.783253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.115 [2024-11-08 02:14:48.783290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.115 [2024-11-08 02:14:48.788422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.115 [2024-11-08 02:14:48.795281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.115 [2024-11-08 02:14:48.795561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.115 [2024-11-08 02:14:48.803260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.115 [2024-11-08 02:14:48.803291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.115 [2024-11-08 02:14:48.811247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.115 [2024-11-08 02:14:48.811274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.115 [2024-11-08 02:14:48.819268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.115 [2024-11-08 02:14:48.819303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.115 [2024-11-08 02:14:48.827274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.115 [2024-11-08 02:14:48.827305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.835284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.835317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.843293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.843355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.851299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:47.116 [2024-11-08 02:14:48.851362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.859303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.859352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.867305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.867365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.875489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.875674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.883419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.883449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 Running I/O for 5 seconds... 00:10:47.116 [2024-11-08 02:14:48.891417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.891607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.904869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.905049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.914753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.914930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.929195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.929372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.938511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.938687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.952025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.952217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.962196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.962376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.973005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.973196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.116 [2024-11-08 02:14:48.985191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.116 [2024-11-08 02:14:48.985369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.002702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.002879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
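The repeating pairs of "Requested NSID 1 already in use" and "nvmf_rpc_ns_paused: Unable to add namespace" records from here on are produced while the second bdevperf run (pid 78283, -t 5 -w randrw) has I/O in flight: each RPC pauses the subsystem (hence the nvmf_rpc_ns_paused callback in the error), tries to attach malloc0 as NSID 1, finds the NSID still occupied, and resumes, and the run evidently tolerates the failed RPC and continues. A hypothetical loop that would reproduce the same pattern against this target (NQN, bdev name and NSID taken from the log; the loop itself is a reconstruction, not the test's own code):

# re-issue the namespace attach while bdevperf keeps the subsystem busy
for _ in $(seq 20); do
    # expected to fail with "Requested NSID 1 already in use"
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done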
00:10:47.375 [2024-11-08 02:14:49.016781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.016944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.033681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.033842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.043353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.043577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.057913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.057946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.067143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.067176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.079347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.079394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.096025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.096059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.112680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.112713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.121959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.122182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.133162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.133193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.145232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.145263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.154073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.154129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.166500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.166532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.178345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.178380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.194054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 
[2024-11-08 02:14:49.194137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.211036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.211087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.221355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.221399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.232919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.233194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.244590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.375 [2024-11-08 02:14:49.244767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.375 [2024-11-08 02:14:49.256242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.256438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.273937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.273970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.291817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.292037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.307581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.307634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.317612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.317805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.330325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.330362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.341347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.341381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.352625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.352803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.364199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.364250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.379977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.380203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.395851] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.396031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.406146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.406194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.418715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.418748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.429810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.429988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.446730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.446764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.464425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.464580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.475049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.475086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.489947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.489980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.500279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.635 [2024-11-08 02:14:49.500311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.635 [2024-11-08 02:14:49.515736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.515912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.531824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.531858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.541855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.542049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.555275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.555312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.565359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.565393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.579299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.579349] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.596379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.596412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.612305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.612338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.621603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.621636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.636816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.636851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.646146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.646220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.657002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.657034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.669026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.669057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.678471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.678549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.690156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.690220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.705254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.705288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.714299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.714348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.730266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.730313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.739729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.739775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.752187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.752234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.763478] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.763511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.895 [2024-11-08 02:14:49.772592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.895 [2024-11-08 02:14:49.772667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.788400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.788496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.805312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.805374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.821214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.821280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.829997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.830053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.842003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.842061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.851468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.851516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.866686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.866748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.881589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.881647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 11763.00 IOPS, 91.90 MiB/s [2024-11-08T02:14:50.038Z] [2024-11-08 02:14:49.897853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.897912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.914443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.914528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.930915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.930971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.948336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.948410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.964244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:48.154 [2024-11-08 02:14:49.964304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.981902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.981949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:49.996338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:49.996388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:50.012742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:50.012791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.154 [2024-11-08 02:14:50.023307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.154 [2024-11-08 02:14:50.023343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.413 [2024-11-08 02:14:50.037809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.413 [2024-11-08 02:14:50.037856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.413 [2024-11-08 02:14:50.048638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.413 [2024-11-08 02:14:50.048684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.413 [2024-11-08 02:14:50.063098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.413 [2024-11-08 02:14:50.063149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.413 [2024-11-08 02:14:50.073894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.413 [2024-11-08 02:14:50.073941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.413 [2024-11-08 02:14:50.090899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.413 [2024-11-08 02:14:50.090946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.413 [2024-11-08 02:14:50.107495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.413 [2024-11-08 02:14:50.107543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.413 [2024-11-08 02:14:50.123664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.413 [2024-11-08 02:14:50.123711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.413 [2024-11-08 02:14:50.134676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.413 [2024-11-08 02:14:50.134723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.413 [2024-11-08 02:14:50.150349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.413 [2024-11-08 02:14:50.150398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.413 [2024-11-08 02:14:50.168171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.414 [2024-11-08 02:14:50.168217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.414 [2024-11-08 02:14:50.184419] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.414 [2024-11-08 02:14:50.184467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.414 [2024-11-08 02:14:50.201223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.414 [2024-11-08 02:14:50.201271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.414 [2024-11-08 02:14:50.212510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.414 [2024-11-08 02:14:50.212557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.414 [2024-11-08 02:14:50.220986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.414 [2024-11-08 02:14:50.221032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.414 [2024-11-08 02:14:50.232751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.414 [2024-11-08 02:14:50.232798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.414 [2024-11-08 02:14:50.243616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.414 [2024-11-08 02:14:50.243663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.414 [2024-11-08 02:14:50.260756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.414 [2024-11-08 02:14:50.260804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.414 [2024-11-08 02:14:50.276081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.414 [2024-11-08 02:14:50.276140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.414 [2024-11-08 02:14:50.285771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.414 [2024-11-08 02:14:50.285806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.298131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.298193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.309527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.309591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.321205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.321253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.333036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.333084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.348061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.348117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.363741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.363789] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.372986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.373033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.388404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.388454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.397680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.397728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.413095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.413168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.423146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.423181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.438097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.438196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.447916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.447964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.463171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.463207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.472062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.472136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.489693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.489741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.499522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.499599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.513562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.513611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.523606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.523654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.538046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.538095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.673 [2024-11-08 02:14:50.546894] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.673 [2024-11-08 02:14:50.546941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.562016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.562064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.571421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.571485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.582824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.582872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.594876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.594924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.604142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.604199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.616266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.616316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.628336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.628383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.644587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.644634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.660828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.660876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.670216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.670263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.684780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.684829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.693837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.693884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.710080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.710193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.727758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.727807] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.743839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.743887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.753019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.753066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.765589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.765636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.775195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.775245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.785409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.785457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.795541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.795602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:48.932 [2024-11-08 02:14:50.805467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:48.932 [2024-11-08 02:14:50.805515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.819901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.819947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.829370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.829420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.840704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.840752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.858599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.858646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.875189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.875226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.884337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.884385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 12008.50 IOPS, 93.82 MiB/s [2024-11-08T02:14:51.076Z] [2024-11-08 02:14:50.895588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.895637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 
02:14:50.907736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.907783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.917150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.917209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.929605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.929653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.945667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.945714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.963814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.963863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.974211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.974261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.984701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.984749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:50.995051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:50.995115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:51.007234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:51.007285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:51.016385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:51.016433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:51.028844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:51.028891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:51.046484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:51.046532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.192 [2024-11-08 02:14:51.062525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.192 [2024-11-08 02:14:51.062573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.451 [2024-11-08 02:14:51.079324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.451 [2024-11-08 02:14:51.079390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.451 [2024-11-08 02:14:51.095759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.451 [2024-11-08 02:14:51.095805] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.451 [2024-11-08 02:14:51.105645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.451 [2024-11-08 02:14:51.105693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.451 [2024-11-08 02:14:51.119917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.451 [2024-11-08 02:14:51.119991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.451 [2024-11-08 02:14:51.135086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.451 [2024-11-08 02:14:51.135162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.451 [2024-11-08 02:14:51.144134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.451 [2024-11-08 02:14:51.144196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.451 [2024-11-08 02:14:51.160533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.451 [2024-11-08 02:14:51.160583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.451 [2024-11-08 02:14:51.178418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.451 [2024-11-08 02:14:51.178507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.451 [2024-11-08 02:14:51.188060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.451 [2024-11-08 02:14:51.188152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.452 [2024-11-08 02:14:51.202014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.452 [2024-11-08 02:14:51.202073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.452 [2024-11-08 02:14:51.210458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.452 [2024-11-08 02:14:51.210532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.452 [2024-11-08 02:14:51.225624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.452 [2024-11-08 02:14:51.225696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.452 [2024-11-08 02:14:51.241703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.452 [2024-11-08 02:14:51.241773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.452 [2024-11-08 02:14:51.250824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.452 [2024-11-08 02:14:51.250892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.452 [2024-11-08 02:14:51.261567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.452 [2024-11-08 02:14:51.261638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.452 [2024-11-08 02:14:51.278574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.452 [2024-11-08 02:14:51.278648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.452 [2024-11-08 02:14:51.295657] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.452 [2024-11-08 02:14:51.295723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.452 [2024-11-08 02:14:51.305327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.452 [2024-11-08 02:14:51.305377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.452 [2024-11-08 02:14:51.315482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.452 [2024-11-08 02:14:51.315544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.452 [2024-11-08 02:14:51.325810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.452 [2024-11-08 02:14:51.325847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.341119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.341179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.357474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.357549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.367372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.367438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.381848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.381897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.399053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.399090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.409338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.409389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.425532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.425583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.436155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.436246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.451622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.451670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.468149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.468238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.478593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.478655] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.714 [2024-11-08 02:14:51.490804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.714 [2024-11-08 02:14:51.490862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.715 [2024-11-08 02:14:51.502729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.715 [2024-11-08 02:14:51.502778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.715 [2024-11-08 02:14:51.518267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.715 [2024-11-08 02:14:51.518301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.715 [2024-11-08 02:14:51.528703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.715 [2024-11-08 02:14:51.528751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.715 [2024-11-08 02:14:51.543792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.715 [2024-11-08 02:14:51.543842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.715 [2024-11-08 02:14:51.559961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.715 [2024-11-08 02:14:51.560010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.715 [2024-11-08 02:14:51.569924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.715 [2024-11-08 02:14:51.569970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.715 [2024-11-08 02:14:51.580818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.715 [2024-11-08 02:14:51.580866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.715 [2024-11-08 02:14:51.590667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.715 [2024-11-08 02:14:51.590714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.606322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.606372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.615581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.615629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.631111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.631159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.646341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.646377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.656454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.656492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.667576] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.667623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.678087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.678177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.689187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.689250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.702284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.702348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.718741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.718789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.735129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.735160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.745265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.745302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.974 [2024-11-08 02:14:51.757814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.974 [2024-11-08 02:14:51.757861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.975 [2024-11-08 02:14:51.773007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.975 [2024-11-08 02:14:51.773049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.975 [2024-11-08 02:14:51.790343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.975 [2024-11-08 02:14:51.790393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.975 [2024-11-08 02:14:51.800821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.975 [2024-11-08 02:14:51.800870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.975 [2024-11-08 02:14:51.812647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.975 [2024-11-08 02:14:51.812695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.975 [2024-11-08 02:14:51.823818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.975 [2024-11-08 02:14:51.823866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.975 [2024-11-08 02:14:51.841114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.975 [2024-11-08 02:14:51.841174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.858850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.858897] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.868735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.868782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.878889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.878937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.888739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.888787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 11928.67 IOPS, 93.19 MiB/s [2024-11-08T02:14:52.118Z] [2024-11-08 02:14:51.898831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.898879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.908904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.908952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.923648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.923696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.934362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.934397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.946184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.946257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.957200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.957296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.973960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.974016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.985772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.985822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:51.994972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:51.995031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:52.006303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:52.006364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:52.016290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:52.016339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 
02:14:52.030321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:52.030356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:52.039680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.234 [2024-11-08 02:14:52.039714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.234 [2024-11-08 02:14:52.050048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.235 [2024-11-08 02:14:52.050081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.235 [2024-11-08 02:14:52.062124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.235 [2024-11-08 02:14:52.062184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.235 [2024-11-08 02:14:52.071706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.235 [2024-11-08 02:14:52.071754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.235 [2024-11-08 02:14:52.083810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.235 [2024-11-08 02:14:52.083858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.235 [2024-11-08 02:14:52.095335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.235 [2024-11-08 02:14:52.095384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.235 [2024-11-08 02:14:52.111940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.235 [2024-11-08 02:14:52.112004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.127928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.127975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.136735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.136782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.149174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.149221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.158255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.158303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.174635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.174683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.184296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.184347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.200248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.200313] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.218373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.218438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.228664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.228712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.242876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.242926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.257612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.257660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.266172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.266199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.277915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.277962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.287440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.287488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.297021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.297068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.306768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.306815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.316753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.316800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.326509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.494 [2024-11-08 02:14:52.326556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.494 [2024-11-08 02:14:52.335990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.495 [2024-11-08 02:14:52.336038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.495 [2024-11-08 02:14:52.345650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.495 [2024-11-08 02:14:52.345696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.495 [2024-11-08 02:14:52.355197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.495 [2024-11-08 02:14:52.355247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.495 [2024-11-08 02:14:52.364597] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.495 [2024-11-08 02:14:52.364660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.495 [2024-11-08 02:14:52.375212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.495 [2024-11-08 02:14:52.375277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.387937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.387985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.397002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.397050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.410936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.410991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.425766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.425814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.441487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.441535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.458831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.458880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.469002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.469050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.479249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.479299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.488952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.489002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.503700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.503748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.513788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.513836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.525393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.525443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.542106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.542180] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.551902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.551949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.563517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.563565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.577268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.577307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.592935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.592984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.602506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.602553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.613720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.613752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.754 [2024-11-08 02:14:52.624099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.754 [2024-11-08 02:14:52.624318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.638513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.638545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.647204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.647239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.662030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.662062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.677381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.677575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.695700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.695732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.710720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.710753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.721888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.721920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.730426] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.730458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.741670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.741702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.758876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.758909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.777206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.777241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.787519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.787691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.801494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.801529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.810579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.810751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.824223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.824422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.833588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.833765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.843891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.844067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.854072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.854282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.864604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.864796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.874787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.874963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.014 [2024-11-08 02:14:52.889299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.014 [2024-11-08 02:14:52.889491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 12027.25 IOPS, 93.96 MiB/s [2024-11-08T02:14:53.158Z] [2024-11-08 02:14:52.905212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:51.274 [2024-11-08 02:14:52.905391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:52.923924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:52.924149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:52.934272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:52.934451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:52.946001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:52.946191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:52.954793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:52.954969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:52.970197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:52.970376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:52.979617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:52.979778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:52.993830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:52.994006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:53.010733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:53.010914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:53.021966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:53.022167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:53.032812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:53.032985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:53.042750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:53.043081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:53.057370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:53.057428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:53.072722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:53.072991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:53.082505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:53.082729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:53.096491] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:53.096725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:53.112460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:53.112676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:53.130212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:53.130548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.274 [2024-11-08 02:14:53.146850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.274 [2024-11-08 02:14:53.147145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.533 [2024-11-08 02:14:53.162329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.533 [2024-11-08 02:14:53.162624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.533 [2024-11-08 02:14:53.171583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.171819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.182767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.182931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.194160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.194346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.209024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.209269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.218411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.218642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.233362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.233556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.242185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.242359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.259526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.259747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.276805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.277057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.287100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.287163] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.297128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.297192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.311053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.311094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.321020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.321058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.335341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.335375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.344501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.344533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.356662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.356694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.374170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.374212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.390031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.390249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.400099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.400160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.534 [2024-11-08 02:14:53.411317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.534 [2024-11-08 02:14:53.411367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.426372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.426404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.443715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.443905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.458952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.459133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.474962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.475145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.491527] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.491756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.501588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.501783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.516507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.516718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.527088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.527252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.542404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.542549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.558421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.558566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.568439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.568611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.580810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.580988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.591646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.591809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.602540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.602705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.615144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.615317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.624625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.624819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.638448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.638621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.653895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.654071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:51.793 [2024-11-08 02:14:53.663162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:51.793 [2024-11-08 02:14:53.663322] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.053 [2024-11-08 02:14:53.679648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.053 [2024-11-08 02:14:53.679826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.053 [2024-11-08 02:14:53.689840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.053 [2024-11-08 02:14:53.690023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.053 [2024-11-08 02:14:53.705308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.053 [2024-11-08 02:14:53.705345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.053 [2024-11-08 02:14:53.720735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.053 [2024-11-08 02:14:53.720912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.053 [2024-11-08 02:14:53.730493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.053 [2024-11-08 02:14:53.730527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.053 [2024-11-08 02:14:53.746164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.053 [2024-11-08 02:14:53.746207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.053 [2024-11-08 02:14:53.764182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.053 [2024-11-08 02:14:53.764240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.053 [2024-11-08 02:14:53.774247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.053 [2024-11-08 02:14:53.774306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.053 [2024-11-08 02:14:53.790084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.053 [2024-11-08 02:14:53.790188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.053 [2024-11-08 02:14:53.805317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.054 [2024-11-08 02:14:53.805369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.054 [2024-11-08 02:14:53.814380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.054 [2024-11-08 02:14:53.814429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.054 [2024-11-08 02:14:53.825486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.054 [2024-11-08 02:14:53.825534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.054 [2024-11-08 02:14:53.835573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.054 [2024-11-08 02:14:53.835887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.054 [2024-11-08 02:14:53.850315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.054 [2024-11-08 02:14:53.850560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.054 [2024-11-08 02:14:53.859184] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.054 [2024-11-08 02:14:53.859225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.054 [2024-11-08 02:14:53.874572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.054 [2024-11-08 02:14:53.874630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.054 [2024-11-08 02:14:53.886205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.054 [2024-11-08 02:14:53.886248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.054 12011.80 IOPS, 93.84 MiB/s
00:10:52.054 Latency(us)
00:10:52.054 [2024-11-08T02:14:53.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:52.054 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:52.054 Nvme1n1 : 5.01 12021.24 93.92 0.00 0.00 10638.23 4051.32 21924.77
00:10:52.054 [2024-11-08T02:14:53.938Z] ===================================================================================================================
00:10:52.054 [2024-11-08T02:14:53.938Z] Total : 12021.24 93.92 0.00 0.00 10638.23 4051.32 21924.77
00:10:52.054 [2024-11-08 02:14:53.897150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.054 [2024-11-08 02:14:53.897215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.054 [2024-11-08 02:14:53.905109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.054 [2024-11-08 02:14:53.905177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.054 [2024-11-08 02:14:53.917177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.054 [2024-11-08 02:14:53.917233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.054 [2024-11-08 02:14:53.929171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.054 [2024-11-08 02:14:53.929230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.313 [2024-11-08 02:14:53.941166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.313 [2024-11-08 02:14:53.941216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.313 [2024-11-08 02:14:53.953162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.313 [2024-11-08 02:14:53.953211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.313 [2024-11-08 02:14:53.965187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.313 [2024-11-08 02:14:53.965247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.313 [2024-11-08 02:14:53.977153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.313 [2024-11-08 02:14:53.977206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.313 [2024-11-08 02:14:53.989173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:52.313 [2024-11-08 02:14:53.989232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:52.313 [2024-11-08 02:14:54.001163]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.313 [2024-11-08 02:14:54.001204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.313 [2024-11-08 02:14:54.013183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.313 [2024-11-08 02:14:54.013234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.313 [2024-11-08 02:14:54.025144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.313 [2024-11-08 02:14:54.025188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.313 [2024-11-08 02:14:54.033103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.313 [2024-11-08 02:14:54.033154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.313 [2024-11-08 02:14:54.045170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.313 [2024-11-08 02:14:54.045230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.313 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (78283) - No such process 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 78283 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:52.313 delay0 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.313 02:14:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:52.572 [2024-11-08 02:14:54.242366] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:59.138 Initializing NVMe Controllers 00:10:59.138 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:59.138 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:59.138 Initialization complete. 
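(The abort run's worker and completion counts continue in the log below.) For reference, the namespace shuffle that precedes this run — visible in the rpc_cmd calls above — detaches the exhausted NSID 1 from nqn.2016-06.io.spdk:cnode1, wraps malloc0 in a delay bdev, and re-attaches it before the abort example is pointed at the target. A minimal standalone sketch of that sequence, assuming the SPDK repo root as the working directory and the default RPC socket (the harness's rpc_cmd wrapper is replaced here by a direct scripts/rpc.py call):

  # drop the namespace left over from the add_ns error loop
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # wrap malloc0 in a delay bdev (the test's latency values, in microseconds)
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # expose the delayed bdev as NSID 1 again
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive queued I/O and aborts against it, mirroring the test's abort invocation
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'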
Launching workers. 00:10:59.138 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 93 00:10:59.138 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 380, failed to submit 33 00:10:59.138 success 247, unsuccessful 133, failed 0 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.138 rmmod nvme_tcp 00:10:59.138 rmmod nvme_fabrics 00:10:59.138 rmmod nvme_keyring 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 78141 ']' 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 78141 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 78141 ']' 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 78141 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78141 00:10:59.138 killing process with pid 78141 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78141' 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 78141 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 78141 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:59.138 02:15:00 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:59.138 00:10:59.138 real 0m23.868s 00:10:59.138 user 0m38.991s 00:10:59.138 sys 0m6.649s 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.138 ************************************ 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.138 END TEST nvmf_zcopy 00:10:59.138 ************************************ 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.138 ************************************ 00:10:59.138 START TEST nvmf_nmic 00:10:59.138 ************************************ 00:10:59.138 02:15:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:59.138 * Looking for test storage... 00:10:59.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:59.138 02:15:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:59.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.398 --rc genhtml_branch_coverage=1 00:10:59.398 --rc genhtml_function_coverage=1 00:10:59.398 --rc genhtml_legend=1 00:10:59.398 --rc geninfo_all_blocks=1 00:10:59.398 --rc geninfo_unexecuted_blocks=1 00:10:59.398 00:10:59.398 ' 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:59.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.398 --rc genhtml_branch_coverage=1 00:10:59.398 --rc genhtml_function_coverage=1 00:10:59.398 --rc genhtml_legend=1 00:10:59.398 --rc geninfo_all_blocks=1 00:10:59.398 --rc geninfo_unexecuted_blocks=1 00:10:59.398 00:10:59.398 ' 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:59.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.398 --rc genhtml_branch_coverage=1 00:10:59.398 --rc genhtml_function_coverage=1 00:10:59.398 --rc genhtml_legend=1 00:10:59.398 --rc geninfo_all_blocks=1 00:10:59.398 --rc geninfo_unexecuted_blocks=1 00:10:59.398 00:10:59.398 ' 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:59.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.398 --rc genhtml_branch_coverage=1 00:10:59.398 --rc genhtml_function_coverage=1 00:10:59.398 --rc genhtml_legend=1 00:10:59.398 --rc geninfo_all_blocks=1 00:10:59.398 --rc geninfo_unexecuted_blocks=1 00:10:59.398 00:10:59.398 ' 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.398 02:15:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.398 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.399 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:59.399 02:15:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:59.399 Cannot 
find device "nvmf_init_br" 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:59.399 Cannot find device "nvmf_init_br2" 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:59.399 Cannot find device "nvmf_tgt_br" 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.399 Cannot find device "nvmf_tgt_br2" 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:59.399 Cannot find device "nvmf_init_br" 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:59.399 Cannot find device "nvmf_init_br2" 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:59.399 Cannot find device "nvmf_tgt_br" 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:59.399 Cannot find device "nvmf_tgt_br2" 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:59.399 Cannot find device "nvmf_br" 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:59.399 Cannot find device "nvmf_init_if" 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:59.399 Cannot find device "nvmf_init_if2" 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:59.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:59.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
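(The remaining veth pairs, addresses, bridge, and iptables rules follow below.) Taken together, nvmf_veth_init builds a two-path test network: the initiator-side interfaces nvmf_init_if/nvmf_init_if2 (10.0.0.1 and 10.0.0.2) stay in the default namespace, the target-side interfaces nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3 and 10.0.0.4) are moved into nvmf_tgt_ns_spdk, and the peer ends of all four pairs are enslaved to an nvmf_br bridge. A condensed single-path sketch of the same topology, assuming iproute2 and the interface names used in the commands above and below:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per interface; the *_br ends are bridged together later
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator side in the default namespace, target side inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  # bridge the peer ends so 10.0.0.1 can reach 10.0.0.3
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # allow NVMe/TCP traffic to port 4420 (the test adds one such rule per initiator interface)
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT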
00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:59.399 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:59.658 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:59.658 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:10:59.658 00:10:59.658 --- 10.0.0.3 ping statistics --- 00:10:59.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.658 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:59.658 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:59.658 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:10:59.658 00:10:59.658 --- 10.0.0.4 ping statistics --- 00:10:59.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.658 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:59.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:59.658 00:10:59.658 --- 10.0.0.1 ping statistics --- 00:10:59.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.658 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:59.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:59.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:59.658 00:10:59.658 --- 10.0.0.2 ping statistics --- 00:10:59.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.658 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=78658 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 78658 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 78658 ']' 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.658 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.917 [2024-11-08 02:15:01.553166] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:59.917 [2024-11-08 02:15:01.554014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.917 [2024-11-08 02:15:01.698735] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.917 [2024-11-08 02:15:01.745751] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.917 [2024-11-08 02:15:01.746321] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.917 [2024-11-08 02:15:01.746609] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.917 [2024-11-08 02:15:01.746927] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.917 [2024-11-08 02:15:01.747185] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:59.917 [2024-11-08 02:15:01.747565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.917 [2024-11-08 02:15:01.747717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.917 [2024-11-08 02:15:01.747813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.917 [2024-11-08 02:15:01.748482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.917 [2024-11-08 02:15:01.782656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.175 [2024-11-08 02:15:01.885640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.175 Malloc0 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:00.175 02:15:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.175 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.176 [2024-11-08 02:15:01.943149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:00.176 test case1: single bdev can't be used in multiple subsystems 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.176 [2024-11-08 02:15:01.966936] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:00.176 [2024-11-08 02:15:01.967294] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:00.176 [2024-11-08 02:15:01.967404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.176 request: 00:11:00.176 { 00:11:00.176 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:00.176 "namespace": { 00:11:00.176 "bdev_name": "Malloc0", 00:11:00.176 "no_auto_visible": false 00:11:00.176 }, 00:11:00.176 "method": "nvmf_subsystem_add_ns", 00:11:00.176 "req_id": 1 00:11:00.176 } 00:11:00.176 Got JSON-RPC error response 00:11:00.176 response: 00:11:00.176 { 00:11:00.176 "code": -32602, 00:11:00.176 "message": "Invalid parameters" 00:11:00.176 } 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:00.176 Adding namespace failed - expected result. 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:00.176 test case2: host connect to nvmf target in multiple paths 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:00.176 [2024-11-08 02:15:01.979127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.176 02:15:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:00.434 02:15:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:00.434 02:15:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.434 02:15:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:00.434 02:15:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.434 02:15:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:00.434 02:15:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:02.965 02:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:02.965 02:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:02.965 02:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.965 02:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:02.965 02:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.965 02:15:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:02.965 02:15:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:02.965 [global] 00:11:02.965 thread=1 00:11:02.965 invalidate=1 00:11:02.965 rw=write 00:11:02.965 time_based=1 00:11:02.965 runtime=1 00:11:02.965 ioengine=libaio 00:11:02.965 direct=1 00:11:02.965 bs=4096 00:11:02.965 iodepth=1 00:11:02.965 norandommap=0 00:11:02.965 numjobs=1 00:11:02.965 00:11:02.965 verify_dump=1 00:11:02.965 verify_backlog=512 00:11:02.965 verify_state_save=0 00:11:02.965 do_verify=1 00:11:02.965 verify=crc32c-intel 00:11:02.965 [job0] 00:11:02.965 filename=/dev/nvme0n1 00:11:02.965 Could not set queue depth (nvme0n1) 00:11:02.965 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.965 fio-3.35 00:11:02.965 Starting 1 thread 00:11:03.920 00:11:03.920 job0: (groupid=0, jobs=1): err= 0: pid=78741: Fri Nov 8 02:15:05 2024 00:11:03.920 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:03.920 slat (nsec): min=11980, max=71223, avg=14614.04, stdev=4775.71 00:11:03.920 clat (usec): min=133, max=450, avg=177.01, stdev=23.85 00:11:03.920 lat (usec): min=145, max=464, avg=191.63, stdev=24.91 00:11:03.920 clat percentiles (usec): 00:11:03.920 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:11:03.920 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:11:03.920 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 215], 00:11:03.920 | 99.00th=[ 245], 99.50th=[ 281], 99.90th=[ 392], 99.95th=[ 420], 00:11:03.920 | 99.99th=[ 449] 00:11:03.920 write: IOPS=3098, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1001msec); 0 zone resets 00:11:03.920 slat (nsec): min=14177, max=71096, avg=20740.47, stdev=5974.44 00:11:03.920 clat (usec): min=81, max=344, avg=108.47, stdev=17.52 00:11:03.920 lat (usec): min=99, max=366, avg=129.21, stdev=19.28 00:11:03.920 clat percentiles (usec): 00:11:03.920 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 96], 00:11:03.920 | 30.00th=[ 99], 40.00th=[ 102], 50.00th=[ 104], 60.00th=[ 109], 00:11:03.920 | 70.00th=[ 115], 80.00th=[ 121], 90.00th=[ 131], 95.00th=[ 141], 00:11:03.920 | 99.00th=[ 161], 99.50th=[ 174], 99.90th=[ 258], 99.95th=[ 273], 00:11:03.920 | 99.99th=[ 347] 00:11:03.920 bw ( KiB/s): min=12656, max=12656, per=100.00%, avg=12656.00, stdev= 0.00, samples=1 00:11:03.920 iops : min= 3164, max= 3164, avg=3164.00, stdev= 0.00, samples=1 00:11:03.920 lat (usec) : 100=17.65%, 250=81.86%, 500=0.49% 00:11:03.920 cpu : usr=2.50%, sys=8.40%, ctx=6174, majf=0, minf=5 00:11:03.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.920 issued rwts: total=3072,3102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.920 00:11:03.920 Run status group 0 (all jobs): 00:11:03.920 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:03.920 WRITE: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=12.1MiB (12.7MB), run=1001-1001msec 00:11:03.920 00:11:03.920 Disk stats (read/write): 00:11:03.920 nvme0n1: ios=2647/3072, merge=0/0, ticks=512/383, in_queue=895, 
util=91.48% 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:03.920 rmmod nvme_tcp 00:11:03.920 rmmod nvme_fabrics 00:11:03.920 rmmod nvme_keyring 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 78658 ']' 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 78658 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 78658 ']' 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 78658 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:03.920 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78658 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:04.178 killing process with pid 78658 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78658' 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 78658 
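The nmic run above reduces to a short JSON-RPC sequence against the target it just started. The following sketch replays that sequence using the same rpc.py verbs, NQNs, serials, and addresses that appear in the trace; it is an illustrative condensation of the traced test, not the test script itself, and the repository paths assume the vagrant layout this job uses.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Target-side setup, as traced above: TCP transport, one 64 MiB malloc bdev,
  # one subsystem exposing it on 10.0.0.3:4420.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Test case 1: a single bdev cannot back namespaces in two subsystems.
  # The add_ns call is expected to fail with code -32602 "Invalid parameters",
  # matching the JSON-RPC error above, because Malloc0 is already claimed by cnode1.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo ' Adding namespace failed - expected result.'
  fi

  # Test case 2: one subsystem, two listeners, two host connections (multipath),
  # then a 4 KiB sequential-write fio pass and a single disconnect for both paths.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  host="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156"
  nvme connect $host -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  nvme connect $host -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # reports "disconnected 2 controller(s)" above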
00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 78658 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:04.178 02:15:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:04.178 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:04.178 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:04.178 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:04.178 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:04.178 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:04.178 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:04.178 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:04.436 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:04.436 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:04.436 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:04.436 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:04.436 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:04.436 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.436 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.436 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.436 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:04.436 ************************************ 00:11:04.436 END TEST nvmf_nmic 00:11:04.436 ************************************ 00:11:04.436 00:11:04.437 real 0m5.357s 00:11:04.437 user 0m15.620s 00:11:04.437 sys 0m2.338s 00:11:04.437 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.437 02:15:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:04.437 02:15:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:04.437 02:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:04.437 02:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.437 02:15:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.437 ************************************ 00:11:04.437 START TEST nvmf_fio_target 00:11:04.437 ************************************ 00:11:04.437 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:04.696 * Looking for test storage... 00:11:04.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:04.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.696 --rc genhtml_branch_coverage=1 00:11:04.696 --rc genhtml_function_coverage=1 00:11:04.696 --rc genhtml_legend=1 00:11:04.696 --rc geninfo_all_blocks=1 00:11:04.696 --rc geninfo_unexecuted_blocks=1 00:11:04.696 00:11:04.696 ' 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:04.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.696 --rc genhtml_branch_coverage=1 00:11:04.696 --rc genhtml_function_coverage=1 00:11:04.696 --rc genhtml_legend=1 00:11:04.696 --rc geninfo_all_blocks=1 00:11:04.696 --rc geninfo_unexecuted_blocks=1 00:11:04.696 00:11:04.696 ' 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:04.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.696 --rc genhtml_branch_coverage=1 00:11:04.696 --rc genhtml_function_coverage=1 00:11:04.696 --rc genhtml_legend=1 00:11:04.696 --rc geninfo_all_blocks=1 00:11:04.696 --rc geninfo_unexecuted_blocks=1 00:11:04.696 00:11:04.696 ' 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:04.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.696 --rc genhtml_branch_coverage=1 00:11:04.696 --rc genhtml_function_coverage=1 00:11:04.696 --rc genhtml_legend=1 00:11:04.696 --rc geninfo_all_blocks=1 00:11:04.696 --rc geninfo_unexecuted_blocks=1 00:11:04.696 00:11:04.696 ' 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:04.696 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:04.696 
02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.697 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.697 02:15:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:04.697 Cannot find device "nvmf_init_br" 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:04.697 Cannot find device "nvmf_init_br2" 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:04.697 Cannot find device "nvmf_tgt_br" 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:04.697 Cannot find device "nvmf_tgt_br2" 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:04.697 Cannot find device "nvmf_init_br" 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:04.697 Cannot find device "nvmf_init_br2" 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:04.697 Cannot find device "nvmf_tgt_br" 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:04.697 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:04.957 Cannot find device "nvmf_tgt_br2" 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:04.957 Cannot find device "nvmf_br" 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:04.957 Cannot find device "nvmf_init_if" 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:04.957 Cannot find device "nvmf_init_if2" 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:04.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:04.957 
02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:04.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:04.957 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:04.957 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:11:04.957 00:11:04.957 --- 10.0.0.3 ping statistics --- 00:11:04.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.957 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:04.957 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:05.216 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:05.216 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:11:05.216 00:11:05.216 --- 10.0.0.4 ping statistics --- 00:11:05.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.216 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:05.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:05.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:11:05.216 00:11:05.216 --- 10.0.0.1 ping statistics --- 00:11:05.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.216 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:05.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:05.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:11:05.216 00:11:05.216 --- 10.0.0.2 ping statistics --- 00:11:05.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.216 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=78971 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 78971 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 78971 ']' 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:05.216 02:15:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.216 [2024-11-08 02:15:06.943800] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
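Before the target is launched, nvmf_veth_init builds the virtual topology that these pings verify: an initiator side in the root namespace and a target side inside the nvmf_tgt_ns_spdk namespace, joined by the nvmf_br bridge. Condensed from the commands traced above, the setup is roughly the following; interface names and the 10.0.0.0/24 addresses are the ones printed in the log, and the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4) is configured the same way and omitted here for brevity.

  # Namespace plus two veth pairs: initiator <-> bridge, target <-> bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addressing: host-side initiator on 10.0.0.1, in-namespace target on 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  # Bridge the two sides together and accept NVMe/TCP traffic on port 4420.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Reachability check in both directions, as in the ping output above.
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

  # The target itself then runs inside the namespace, which is why NVMF_APP is
  # prefixed with the netns command in the trace:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF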
00:11:05.216 [2024-11-08 02:15:06.944095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.216 [2024-11-08 02:15:07.086771] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.475 [2024-11-08 02:15:07.125187] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.475 [2024-11-08 02:15:07.125447] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.475 [2024-11-08 02:15:07.125631] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.475 [2024-11-08 02:15:07.125909] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.475 [2024-11-08 02:15:07.125951] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.475 [2024-11-08 02:15:07.126248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.475 [2024-11-08 02:15:07.126334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.475 [2024-11-08 02:15:07.126418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.475 [2024-11-08 02:15:07.126419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.476 [2024-11-08 02:15:07.155277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.476 02:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:05.476 02:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:05.476 02:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:05.476 02:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:05.476 02:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.476 02:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.476 02:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:05.735 [2024-11-08 02:15:07.539138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.735 02:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.302 02:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:06.302 02:15:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.561 02:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:06.561 02:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.820 02:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:06.820 02:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.079 02:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:07.079 02:15:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:07.646 02:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.646 02:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:07.646 02:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.913 02:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:07.913 02:15:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:08.172 02:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:08.172 02:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:08.431 02:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:08.689 02:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:08.689 02:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.256 02:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:09.256 02:15:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:09.514 02:15:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:09.514 [2024-11-08 02:15:11.391008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:09.772 02:15:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:10.030 02:15:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:10.289 02:15:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:10.289 02:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:10.289 02:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:10.289 02:15:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:10.289 02:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:10.289 02:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:10.289 02:15:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:12.193 02:15:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:12.193 02:15:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:12.193 02:15:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.562 02:15:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:12.562 02:15:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.562 02:15:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:12.562 02:15:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:12.562 [global] 00:11:12.562 thread=1 00:11:12.562 invalidate=1 00:11:12.562 rw=write 00:11:12.562 time_based=1 00:11:12.562 runtime=1 00:11:12.562 ioengine=libaio 00:11:12.562 direct=1 00:11:12.562 bs=4096 00:11:12.562 iodepth=1 00:11:12.562 norandommap=0 00:11:12.562 numjobs=1 00:11:12.562 00:11:12.562 verify_dump=1 00:11:12.562 verify_backlog=512 00:11:12.562 verify_state_save=0 00:11:12.562 do_verify=1 00:11:12.562 verify=crc32c-intel 00:11:12.562 [job0] 00:11:12.562 filename=/dev/nvme0n1 00:11:12.562 [job1] 00:11:12.562 filename=/dev/nvme0n2 00:11:12.562 [job2] 00:11:12.562 filename=/dev/nvme0n3 00:11:12.562 [job3] 00:11:12.562 filename=/dev/nvme0n4 00:11:12.562 Could not set queue depth (nvme0n1) 00:11:12.562 Could not set queue depth (nvme0n2) 00:11:12.562 Could not set queue depth (nvme0n3) 00:11:12.562 Could not set queue depth (nvme0n4) 00:11:12.562 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.562 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.562 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.562 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.562 fio-3.35 00:11:12.562 Starting 4 threads 00:11:13.962 00:11:13.962 job0: (groupid=0, jobs=1): err= 0: pid=79159: Fri Nov 8 02:15:15 2024 00:11:13.962 read: IOPS=2308, BW=9235KiB/s (9456kB/s)(9244KiB/1001msec) 00:11:13.962 slat (nsec): min=12538, max=71725, avg=17876.71, stdev=5869.24 00:11:13.962 clat (usec): min=137, max=6135, avg=204.98, stdev=151.32 00:11:13.962 lat (usec): min=151, max=6150, avg=222.86, stdev=152.00 00:11:13.962 clat percentiles (usec): 00:11:13.962 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:11:13.963 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:11:13.963 | 70.00th=[ 186], 80.00th=[ 200], 90.00th=[ 363], 95.00th=[ 437], 00:11:13.963 | 99.00th=[ 506], 99.50th=[ 553], 99.90th=[ 685], 99.95th=[ 758], 00:11:13.963 | 99.99th=[ 
6128] 00:11:13.963 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:13.963 slat (nsec): min=14807, max=97730, avg=25196.07, stdev=8261.88 00:11:13.963 clat (usec): min=90, max=3925, avg=160.33, stdev=154.92 00:11:13.963 lat (usec): min=109, max=3957, avg=185.53, stdev=157.66 00:11:13.963 clat percentiles (usec): 00:11:13.963 | 1.00th=[ 96], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 115], 00:11:13.963 | 30.00th=[ 120], 40.00th=[ 124], 50.00th=[ 129], 60.00th=[ 135], 00:11:13.963 | 70.00th=[ 143], 80.00th=[ 155], 90.00th=[ 273], 95.00th=[ 334], 00:11:13.963 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 3163], 99.95th=[ 3752], 00:11:13.963 | 99.99th=[ 3916] 00:11:13.963 bw ( KiB/s): min=12288, max=12288, per=42.90%, avg=12288.00, stdev= 0.00, samples=1 00:11:13.963 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:13.963 lat (usec) : 100=1.35%, 250=87.09%, 500=10.10%, 750=1.29%, 1000=0.04% 00:11:13.963 lat (msec) : 4=0.10%, 10=0.02% 00:11:13.963 cpu : usr=2.80%, sys=8.00%, ctx=4878, majf=0, minf=3 00:11:13.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.963 issued rwts: total=2311,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.963 job1: (groupid=0, jobs=1): err= 0: pid=79160: Fri Nov 8 02:15:15 2024 00:11:13.963 read: IOPS=1214, BW=4859KiB/s (4976kB/s)(4864KiB/1001msec) 00:11:13.963 slat (usec): min=10, max=200, avg=17.77, stdev= 9.31 00:11:13.963 clat (usec): min=207, max=764, avg=417.62, stdev=101.48 00:11:13.963 lat (usec): min=268, max=783, avg=435.39, stdev=103.06 00:11:13.963 clat percentiles (usec): 00:11:13.963 | 1.00th=[ 273], 5.00th=[ 322], 10.00th=[ 334], 20.00th=[ 347], 00:11:13.963 | 30.00th=[ 359], 40.00th=[ 371], 50.00th=[ 383], 60.00th=[ 396], 00:11:13.963 | 70.00th=[ 420], 80.00th=[ 465], 90.00th=[ 594], 95.00th=[ 652], 00:11:13.963 | 99.00th=[ 725], 99.50th=[ 734], 99.90th=[ 766], 99.95th=[ 766], 00:11:13.963 | 99.99th=[ 766] 00:11:13.963 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:13.963 slat (nsec): min=14031, max=88351, avg=21863.18, stdev=5848.26 00:11:13.963 clat (usec): min=164, max=822, avg=280.91, stdev=58.47 00:11:13.963 lat (usec): min=181, max=849, avg=302.77, stdev=59.22 00:11:13.963 clat percentiles (usec): 00:11:13.963 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 210], 20.00th=[ 231], 00:11:13.963 | 30.00th=[ 249], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 293], 00:11:13.963 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 371], 00:11:13.963 | 99.00th=[ 457], 99.50th=[ 494], 99.90th=[ 685], 99.95th=[ 824], 00:11:13.963 | 99.99th=[ 824] 00:11:13.963 bw ( KiB/s): min= 7832, max= 7832, per=27.34%, avg=7832.00, stdev= 0.00, samples=1 00:11:13.963 iops : min= 1958, max= 1958, avg=1958.00, stdev= 0.00, samples=1 00:11:13.963 lat (usec) : 250=17.37%, 500=74.71%, 750=7.74%, 1000=0.18% 00:11:13.963 cpu : usr=1.00%, sys=5.00%, ctx=2753, majf=0, minf=15 00:11:13.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.963 issued rwts: total=1216,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.963 
latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.963 job2: (groupid=0, jobs=1): err= 0: pid=79161: Fri Nov 8 02:15:15 2024 00:11:13.963 read: IOPS=1093, BW=4376KiB/s (4481kB/s)(4380KiB/1001msec) 00:11:13.963 slat (usec): min=11, max=104, avg=27.25, stdev=11.01 00:11:13.963 clat (usec): min=196, max=852, avg=416.79, stdev=98.99 00:11:13.963 lat (usec): min=223, max=872, avg=444.04, stdev=103.86 00:11:13.963 clat percentiles (usec): 00:11:13.963 | 1.00th=[ 249], 5.00th=[ 318], 10.00th=[ 334], 20.00th=[ 347], 00:11:13.963 | 30.00th=[ 355], 40.00th=[ 367], 50.00th=[ 379], 60.00th=[ 396], 00:11:13.963 | 70.00th=[ 429], 80.00th=[ 519], 90.00th=[ 578], 95.00th=[ 611], 00:11:13.963 | 99.00th=[ 693], 99.50th=[ 734], 99.90th=[ 766], 99.95th=[ 857], 00:11:13.963 | 99.99th=[ 857] 00:11:13.963 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:13.963 slat (usec): min=14, max=109, avg=34.48, stdev=11.82 00:11:13.963 clat (usec): min=115, max=608, avg=294.99, stdev=121.80 00:11:13.963 lat (usec): min=141, max=648, avg=329.47, stdev=128.53 00:11:13.963 clat percentiles (usec): 00:11:13.963 | 1.00th=[ 127], 5.00th=[ 137], 10.00th=[ 147], 20.00th=[ 165], 00:11:13.963 | 30.00th=[ 235], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 297], 00:11:13.963 | 70.00th=[ 326], 80.00th=[ 400], 90.00th=[ 498], 95.00th=[ 529], 00:11:13.963 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 603], 99.95th=[ 611], 00:11:13.963 | 99.99th=[ 611] 00:11:13.963 bw ( KiB/s): min= 5504, max= 5504, per=19.22%, avg=5504.00, stdev= 0.00, samples=1 00:11:13.963 iops : min= 1376, max= 1376, avg=1376.00, stdev= 0.00, samples=1 00:11:13.963 lat (usec) : 250=19.95%, 500=64.92%, 750=14.98%, 1000=0.15% 00:11:13.963 cpu : usr=2.00%, sys=6.50%, ctx=2632, majf=0, minf=15 00:11:13.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.963 issued rwts: total=1095,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.963 job3: (groupid=0, jobs=1): err= 0: pid=79162: Fri Nov 8 02:15:15 2024 00:11:13.963 read: IOPS=1214, BW=4859KiB/s (4976kB/s)(4864KiB/1001msec) 00:11:13.963 slat (nsec): min=10980, max=70715, avg=22883.06, stdev=8071.39 00:11:13.963 clat (usec): min=247, max=759, avg=411.78, stdev=100.91 00:11:13.963 lat (usec): min=273, max=797, avg=434.66, stdev=103.06 00:11:13.963 clat percentiles (usec): 00:11:13.963 | 1.00th=[ 273], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 343], 00:11:13.963 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 388], 00:11:13.963 | 70.00th=[ 412], 80.00th=[ 461], 90.00th=[ 594], 95.00th=[ 635], 00:11:13.963 | 99.00th=[ 709], 99.50th=[ 725], 99.90th=[ 750], 99.95th=[ 758], 00:11:13.963 | 99.99th=[ 758] 00:11:13.963 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:13.963 slat (nsec): min=21378, max=91272, avg=30277.69, stdev=7034.65 00:11:13.963 clat (usec): min=158, max=925, avg=271.92, stdev=56.97 00:11:13.963 lat (usec): min=181, max=954, avg=302.19, stdev=57.68 00:11:13.963 clat percentiles (usec): 00:11:13.963 | 1.00th=[ 174], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 223], 00:11:13.963 | 30.00th=[ 241], 40.00th=[ 258], 50.00th=[ 273], 60.00th=[ 285], 00:11:13.963 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 334], 95.00th=[ 355], 00:11:13.963 | 99.00th=[ 441], 99.50th=[ 
478], 99.90th=[ 570], 99.95th=[ 922], 00:11:13.963 | 99.99th=[ 922] 00:11:13.963 bw ( KiB/s): min= 7840, max= 7840, per=27.37%, avg=7840.00, stdev= 0.00, samples=1 00:11:13.963 iops : min= 1960, max= 1960, avg=1960.00, stdev= 0.00, samples=1 00:11:13.963 lat (usec) : 250=19.51%, 500=72.71%, 750=7.67%, 1000=0.11% 00:11:13.963 cpu : usr=1.50%, sys=6.80%, ctx=2752, majf=0, minf=6 00:11:13.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.963 issued rwts: total=1216,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.963 00:11:13.963 Run status group 0 (all jobs): 00:11:13.963 READ: bw=22.8MiB/s (23.9MB/s), 4376KiB/s-9235KiB/s (4481kB/s-9456kB/s), io=22.8MiB (23.9MB), run=1001-1001msec 00:11:13.963 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:11:13.963 00:11:13.963 Disk stats (read/write): 00:11:13.963 nvme0n1: ios=2098/2508, merge=0/0, ticks=395/413, in_queue=808, util=87.88% 00:11:13.963 nvme0n2: ios=1056/1455, merge=0/0, ticks=397/357, in_queue=754, util=88.41% 00:11:13.963 nvme0n3: ios=1024/1109, merge=0/0, ticks=436/362, in_queue=798, util=89.22% 00:11:13.963 nvme0n4: ios=1024/1455, merge=0/0, ticks=383/409, in_queue=792, util=89.69% 00:11:13.963 02:15:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:13.963 [global] 00:11:13.963 thread=1 00:11:13.963 invalidate=1 00:11:13.963 rw=randwrite 00:11:13.963 time_based=1 00:11:13.963 runtime=1 00:11:13.963 ioengine=libaio 00:11:13.963 direct=1 00:11:13.963 bs=4096 00:11:13.963 iodepth=1 00:11:13.963 norandommap=0 00:11:13.963 numjobs=1 00:11:13.963 00:11:13.963 verify_dump=1 00:11:13.963 verify_backlog=512 00:11:13.963 verify_state_save=0 00:11:13.963 do_verify=1 00:11:13.963 verify=crc32c-intel 00:11:13.963 [job0] 00:11:13.963 filename=/dev/nvme0n1 00:11:13.963 [job1] 00:11:13.963 filename=/dev/nvme0n2 00:11:13.963 [job2] 00:11:13.963 filename=/dev/nvme0n3 00:11:13.963 [job3] 00:11:13.963 filename=/dev/nvme0n4 00:11:13.963 Could not set queue depth (nvme0n1) 00:11:13.963 Could not set queue depth (nvme0n2) 00:11:13.963 Could not set queue depth (nvme0n3) 00:11:13.963 Could not set queue depth (nvme0n4) 00:11:13.963 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.963 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.964 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.964 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.964 fio-3.35 00:11:13.964 Starting 4 threads 00:11:15.339 00:11:15.339 job0: (groupid=0, jobs=1): err= 0: pid=79215: Fri Nov 8 02:15:16 2024 00:11:15.339 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:15.339 slat (nsec): min=8652, max=47237, avg=15040.32, stdev=4009.29 00:11:15.339 clat (usec): min=136, max=564, avg=189.80, stdev=48.01 00:11:15.339 lat (usec): min=149, max=586, avg=204.84, stdev=47.79 00:11:15.339 clat percentiles (usec): 00:11:15.339 | 1.00th=[ 147], 5.00th=[ 153], 
10.00th=[ 157], 20.00th=[ 161], 00:11:15.339 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:11:15.339 | 70.00th=[ 182], 80.00th=[ 215], 90.00th=[ 260], 95.00th=[ 293], 00:11:15.339 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 400], 99.95th=[ 453], 00:11:15.339 | 99.99th=[ 562] 00:11:15.339 write: IOPS=2910, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:11:15.339 slat (usec): min=13, max=130, avg=22.51, stdev= 6.96 00:11:15.339 clat (usec): min=94, max=581, avg=136.93, stdev=27.97 00:11:15.339 lat (usec): min=112, max=602, avg=159.44, stdev=28.09 00:11:15.339 clat percentiles (usec): 00:11:15.339 | 1.00th=[ 108], 5.00th=[ 114], 10.00th=[ 118], 20.00th=[ 122], 00:11:15.339 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:11:15.339 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 161], 95.00th=[ 204], 00:11:15.339 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 273], 99.95th=[ 285], 00:11:15.339 | 99.99th=[ 578] 00:11:15.339 bw ( KiB/s): min=12288, max=12288, per=26.47%, avg=12288.00, stdev= 0.00, samples=1 00:11:15.339 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:15.339 lat (usec) : 100=0.07%, 250=93.44%, 500=6.45%, 750=0.04% 00:11:15.339 cpu : usr=2.40%, sys=8.40%, ctx=5475, majf=0, minf=7 00:11:15.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.339 issued rwts: total=2560,2913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.339 job1: (groupid=0, jobs=1): err= 0: pid=79216: Fri Nov 8 02:15:16 2024 00:11:15.339 read: IOPS=3037, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec) 00:11:15.339 slat (nsec): min=10839, max=48413, avg=13737.61, stdev=3160.95 00:11:15.339 clat (usec): min=133, max=643, avg=164.29, stdev=17.64 00:11:15.339 lat (usec): min=145, max=655, avg=178.03, stdev=18.35 00:11:15.339 clat percentiles (usec): 00:11:15.340 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:11:15.340 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:11:15.340 | 70.00th=[ 172], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 186], 00:11:15.340 | 99.00th=[ 196], 99.50th=[ 204], 99.90th=[ 297], 99.95th=[ 553], 00:11:15.340 | 99.99th=[ 644] 00:11:15.340 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:15.340 slat (nsec): min=13761, max=68980, avg=20586.10, stdev=3921.86 00:11:15.340 clat (usec): min=92, max=662, avg=125.20, stdev=14.96 00:11:15.340 lat (usec): min=109, max=680, avg=145.79, stdev=15.66 00:11:15.340 clat percentiles (usec): 00:11:15.340 | 1.00th=[ 100], 5.00th=[ 108], 10.00th=[ 112], 20.00th=[ 117], 00:11:15.340 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 128], 00:11:15.340 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:11:15.340 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 297], 00:11:15.340 | 99.99th=[ 660] 00:11:15.340 bw ( KiB/s): min=12288, max=12288, per=26.47%, avg=12288.00, stdev= 0.00, samples=1 00:11:15.340 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:15.340 lat (usec) : 100=0.46%, 250=99.39%, 500=0.10%, 750=0.05% 00:11:15.340 cpu : usr=2.30%, sys=8.60%, ctx=6113, majf=0, minf=14 00:11:15.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.340 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.340 issued rwts: total=3041,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.340 job2: (groupid=0, jobs=1): err= 0: pid=79217: Fri Nov 8 02:15:16 2024 00:11:15.340 read: IOPS=2476, BW=9906KiB/s (10.1MB/s)(9916KiB/1001msec) 00:11:15.340 slat (nsec): min=11742, max=47902, avg=13915.40, stdev=2248.38 00:11:15.340 clat (usec): min=145, max=377, avg=194.94, stdev=43.63 00:11:15.340 lat (usec): min=159, max=391, avg=208.86, stdev=43.95 00:11:15.340 clat percentiles (usec): 00:11:15.340 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 169], 00:11:15.340 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:11:15.340 | 70.00th=[ 190], 80.00th=[ 202], 90.00th=[ 260], 95.00th=[ 302], 00:11:15.340 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 371], 99.95th=[ 379], 00:11:15.340 | 99.99th=[ 379] 00:11:15.340 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:15.340 slat (usec): min=12, max=121, avg=21.19, stdev= 5.84 00:11:15.340 clat (usec): min=106, max=4638, avg=163.56, stdev=195.71 00:11:15.340 lat (usec): min=125, max=4657, avg=184.75, stdev=196.44 00:11:15.340 clat percentiles (usec): 00:11:15.340 | 1.00th=[ 117], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:11:15.340 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:11:15.340 | 70.00th=[ 149], 80.00th=[ 159], 90.00th=[ 210], 95.00th=[ 233], 00:11:15.340 | 99.00th=[ 351], 99.50th=[ 1401], 99.90th=[ 3949], 99.95th=[ 4555], 00:11:15.340 | 99.99th=[ 4621] 00:11:15.340 bw ( KiB/s): min=12288, max=12288, per=26.47%, avg=12288.00, stdev= 0.00, samples=1 00:11:15.340 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:15.340 lat (usec) : 250=92.34%, 500=7.22%, 750=0.16%, 1000=0.02% 00:11:15.340 lat (msec) : 2=0.16%, 4=0.06%, 10=0.04% 00:11:15.340 cpu : usr=1.90%, sys=7.40%, ctx=5042, majf=0, minf=9 00:11:15.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.340 issued rwts: total=2479,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.340 job3: (groupid=0, jobs=1): err= 0: pid=79218: Fri Nov 8 02:15:16 2024 00:11:15.340 read: IOPS=2615, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:11:15.340 slat (nsec): min=11912, max=45295, avg=14294.56, stdev=2559.01 00:11:15.340 clat (usec): min=144, max=2083, avg=179.47, stdev=41.36 00:11:15.340 lat (usec): min=156, max=2109, avg=193.76, stdev=41.59 00:11:15.340 clat percentiles (usec): 00:11:15.340 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 169], 00:11:15.340 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:11:15.340 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:11:15.340 | 99.00th=[ 217], 99.50th=[ 225], 99.90th=[ 498], 99.95th=[ 685], 00:11:15.340 | 99.99th=[ 2089] 00:11:15.340 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:15.340 slat (nsec): min=14345, max=84826, avg=20616.00, stdev=4640.82 00:11:15.340 clat (usec): min=108, max=716, avg=136.59, stdev=15.36 00:11:15.340 lat (usec): min=127, max=736, avg=157.20, stdev=16.07 00:11:15.340 clat 
percentiles (usec): 00:11:15.340 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 127], 00:11:15.340 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:11:15.340 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:11:15.340 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 190], 99.95th=[ 200], 00:11:15.340 | 99.99th=[ 717] 00:11:15.340 bw ( KiB/s): min=12288, max=12288, per=26.47%, avg=12288.00, stdev= 0.00, samples=1 00:11:15.340 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:15.340 lat (usec) : 250=99.86%, 500=0.09%, 750=0.04% 00:11:15.340 lat (msec) : 4=0.02% 00:11:15.340 cpu : usr=2.20%, sys=7.80%, ctx=5691, majf=0, minf=15 00:11:15.340 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:15.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:15.340 issued rwts: total=2618,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:15.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:15.340 00:11:15.340 Run status group 0 (all jobs): 00:11:15.340 READ: bw=41.7MiB/s (43.8MB/s), 9906KiB/s-11.9MiB/s (10.1MB/s-12.4MB/s), io=41.8MiB (43.8MB), run=1001-1001msec 00:11:15.340 WRITE: bw=45.3MiB/s (47.5MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=45.4MiB (47.6MB), run=1001-1001msec 00:11:15.340 00:11:15.340 Disk stats (read/write): 00:11:15.340 nvme0n1: ios=2328/2560, merge=0/0, ticks=472/365, in_queue=837, util=87.78% 00:11:15.340 nvme0n2: ios=2580/2587, merge=0/0, ticks=437/343, in_queue=780, util=86.48% 00:11:15.340 nvme0n3: ios=2048/2435, merge=0/0, ticks=369/382, in_queue=751, util=88.08% 00:11:15.340 nvme0n4: ios=2254/2560, merge=0/0, ticks=415/368, in_queue=783, util=89.44% 00:11:15.340 02:15:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:15.340 [global] 00:11:15.340 thread=1 00:11:15.340 invalidate=1 00:11:15.340 rw=write 00:11:15.340 time_based=1 00:11:15.340 runtime=1 00:11:15.340 ioengine=libaio 00:11:15.340 direct=1 00:11:15.340 bs=4096 00:11:15.340 iodepth=128 00:11:15.340 norandommap=0 00:11:15.340 numjobs=1 00:11:15.340 00:11:15.340 verify_dump=1 00:11:15.340 verify_backlog=512 00:11:15.340 verify_state_save=0 00:11:15.340 do_verify=1 00:11:15.340 verify=crc32c-intel 00:11:15.340 [job0] 00:11:15.340 filename=/dev/nvme0n1 00:11:15.340 [job1] 00:11:15.340 filename=/dev/nvme0n2 00:11:15.340 [job2] 00:11:15.340 filename=/dev/nvme0n3 00:11:15.340 [job3] 00:11:15.340 filename=/dev/nvme0n4 00:11:15.340 Could not set queue depth (nvme0n1) 00:11:15.340 Could not set queue depth (nvme0n2) 00:11:15.340 Could not set queue depth (nvme0n3) 00:11:15.340 Could not set queue depth (nvme0n4) 00:11:15.340 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:15.340 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:15.340 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:15.340 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:15.340 fio-3.35 00:11:15.340 Starting 4 threads 00:11:16.718 00:11:16.718 job0: (groupid=0, jobs=1): err= 0: pid=79273: Fri Nov 8 02:15:18 2024 00:11:16.718 read: IOPS=2037, BW=8151KiB/s 
(8347kB/s)(8192KiB/1005msec) 00:11:16.718 slat (usec): min=8, max=17634, avg=262.18, stdev=1031.01 00:11:16.718 clat (usec): min=17293, max=55318, avg=33590.80, stdev=6500.70 00:11:16.718 lat (usec): min=18848, max=55330, avg=33852.98, stdev=6526.00 00:11:16.718 clat percentiles (usec): 00:11:16.718 | 1.00th=[19530], 5.00th=[23200], 10.00th=[24773], 20.00th=[27657], 00:11:16.718 | 30.00th=[30016], 40.00th=[32113], 50.00th=[33817], 60.00th=[35914], 00:11:16.718 | 70.00th=[37487], 80.00th=[38536], 90.00th=[41681], 95.00th=[43254], 00:11:16.718 | 99.00th=[50070], 99.50th=[51643], 99.90th=[55313], 99.95th=[55313], 00:11:16.718 | 99.99th=[55313] 00:11:16.718 write: IOPS=2351, BW=9405KiB/s (9631kB/s)(9452KiB/1005msec); 0 zone resets 00:11:16.718 slat (usec): min=5, max=7889, avg=188.57, stdev=713.38 00:11:16.718 clat (usec): min=4496, max=38135, avg=24552.71, stdev=5487.07 00:11:16.718 lat (usec): min=7534, max=39082, avg=24741.29, stdev=5510.83 00:11:16.718 clat percentiles (usec): 00:11:16.718 | 1.00th=[ 9634], 5.00th=[17171], 10.00th=[18220], 20.00th=[19006], 00:11:16.718 | 30.00th=[21103], 40.00th=[22938], 50.00th=[24773], 60.00th=[25822], 00:11:16.718 | 70.00th=[27132], 80.00th=[28705], 90.00th=[31851], 95.00th=[34341], 00:11:16.718 | 99.00th=[36439], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:11:16.718 | 99.99th=[38011] 00:11:16.718 bw ( KiB/s): min= 7976, max= 9912, per=16.93%, avg=8944.00, stdev=1368.96, samples=2 00:11:16.718 iops : min= 1994, max= 2478, avg=2236.00, stdev=342.24, samples=2 00:11:16.718 lat (msec) : 10=0.68%, 20=13.35%, 50=85.67%, 100=0.29% 00:11:16.718 cpu : usr=2.39%, sys=6.18%, ctx=713, majf=0, minf=13 00:11:16.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:11:16.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.718 issued rwts: total=2048,2363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.718 job1: (groupid=0, jobs=1): err= 0: pid=79274: Fri Nov 8 02:15:18 2024 00:11:16.718 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:11:16.718 slat (usec): min=4, max=2697, avg=76.98, stdev=355.32 00:11:16.718 clat (usec): min=7691, max=12689, avg=10396.93, stdev=831.18 00:11:16.718 lat (usec): min=9288, max=12701, avg=10473.91, stdev=759.85 00:11:16.718 clat percentiles (usec): 00:11:16.718 | 1.00th=[ 8094], 5.00th=[ 9634], 10.00th=[ 9634], 20.00th=[ 9896], 00:11:16.718 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:11:16.718 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11863], 95.00th=[12125], 00:11:16.718 | 99.00th=[12518], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:11:16.718 | 99.99th=[12649] 00:11:16.718 write: IOPS=6292, BW=24.6MiB/s (25.8MB/s)(24.6MiB/1002msec); 0 zone resets 00:11:16.718 slat (usec): min=10, max=4221, avg=76.19, stdev=314.15 00:11:16.718 clat (usec): min=222, max=13586, avg=9948.84, stdev=1136.53 00:11:16.718 lat (usec): min=1930, max=13617, avg=10025.03, stdev=1096.02 00:11:16.718 clat percentiles (usec): 00:11:16.718 | 1.00th=[ 5014], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9503], 00:11:16.718 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9765], 60.00th=[ 9896], 00:11:16.718 | 70.00th=[10028], 80.00th=[10159], 90.00th=[11469], 95.00th=[11994], 00:11:16.718 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13435], 99.95th=[13566], 00:11:16.718 | 99.99th=[13566] 00:11:16.718 
bw ( KiB/s): min=24576, max=24848, per=46.77%, avg=24712.00, stdev=192.33, samples=2 00:11:16.718 iops : min= 6144, max= 6212, avg=6178.00, stdev=48.08, samples=2 00:11:16.718 lat (usec) : 250=0.01% 00:11:16.718 lat (msec) : 2=0.03%, 4=0.22%, 10=53.53%, 20=46.20% 00:11:16.719 cpu : usr=4.70%, sys=16.28%, ctx=390, majf=0, minf=8 00:11:16.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:16.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.719 issued rwts: total=6144,6305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.719 job2: (groupid=0, jobs=1): err= 0: pid=79275: Fri Nov 8 02:15:18 2024 00:11:16.719 read: IOPS=1904, BW=7617KiB/s (7800kB/s)(7640KiB/1003msec) 00:11:16.719 slat (usec): min=3, max=8578, avg=280.69, stdev=986.86 00:11:16.719 clat (usec): min=1149, max=49793, avg=33813.94, stdev=7285.97 00:11:16.719 lat (usec): min=5161, max=49809, avg=34094.63, stdev=7284.40 00:11:16.719 clat percentiles (usec): 00:11:16.719 | 1.00th=[13698], 5.00th=[22938], 10.00th=[24773], 20.00th=[27132], 00:11:16.719 | 30.00th=[30016], 40.00th=[32900], 50.00th=[34866], 60.00th=[36963], 00:11:16.719 | 70.00th=[38536], 80.00th=[40109], 90.00th=[42206], 95.00th=[43254], 00:11:16.719 | 99.00th=[45876], 99.50th=[46924], 99.90th=[47449], 99.95th=[49546], 00:11:16.719 | 99.99th=[49546] 00:11:16.719 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:11:16.719 slat (usec): min=6, max=7491, avg=219.42, stdev=760.89 00:11:16.719 clat (usec): min=16904, max=44270, avg=29922.15, stdev=5208.90 00:11:16.719 lat (usec): min=16928, max=44302, avg=30141.57, stdev=5224.90 00:11:16.719 clat percentiles (usec): 00:11:16.719 | 1.00th=[19006], 5.00th=[20841], 10.00th=[22152], 20.00th=[24511], 00:11:16.719 | 30.00th=[27132], 40.00th=[29492], 50.00th=[31065], 60.00th=[31851], 00:11:16.719 | 70.00th=[32900], 80.00th=[33817], 90.00th=[36439], 95.00th=[36963], 00:11:16.719 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:11:16.719 | 99.99th=[44303] 00:11:16.719 bw ( KiB/s): min= 8175, max= 8192, per=15.49%, avg=8183.50, stdev=12.02, samples=2 00:11:16.719 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:11:16.719 lat (msec) : 2=0.03%, 10=0.40%, 20=3.26%, 50=96.31% 00:11:16.719 cpu : usr=1.70%, sys=5.89%, ctx=771, majf=0, minf=15 00:11:16.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:11:16.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.719 issued rwts: total=1910,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.719 job3: (groupid=0, jobs=1): err= 0: pid=79276: Fri Nov 8 02:15:18 2024 00:11:16.719 read: IOPS=2253, BW=9015KiB/s (9231kB/s)(9060KiB/1005msec) 00:11:16.719 slat (usec): min=3, max=11718, avg=227.79, stdev=951.05 00:11:16.719 clat (usec): min=1080, max=49579, avg=29088.16, stdev=10206.73 00:11:16.719 lat (usec): min=5146, max=49603, avg=29315.95, stdev=10270.34 00:11:16.719 clat percentiles (usec): 00:11:16.719 | 1.00th=[10421], 5.00th=[13435], 10.00th=[13566], 20.00th=[14615], 00:11:16.719 | 30.00th=[24773], 40.00th=[30540], 50.00th=[32637], 60.00th=[34341], 00:11:16.719 | 70.00th=[35914], 80.00th=[36963], 
90.00th=[40633], 95.00th=[42730], 00:11:16.719 | 99.00th=[45876], 99.50th=[47449], 99.90th=[49021], 99.95th=[49021], 00:11:16.719 | 99.99th=[49546] 00:11:16.719 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:11:16.719 slat (usec): min=6, max=8674, avg=182.84, stdev=685.93 00:11:16.719 clat (usec): min=11808, max=39199, avg=23869.87, stdev=8665.11 00:11:16.719 lat (usec): min=11831, max=39213, avg=24052.71, stdev=8728.87 00:11:16.719 clat percentiles (usec): 00:11:16.719 | 1.00th=[12256], 5.00th=[12649], 10.00th=[12780], 20.00th=[13173], 00:11:16.719 | 30.00th=[14615], 40.00th=[19792], 50.00th=[27132], 60.00th=[28967], 00:11:16.719 | 70.00th=[31065], 80.00th=[32375], 90.00th=[33817], 95.00th=[34866], 00:11:16.719 | 99.00th=[36439], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:11:16.719 | 99.99th=[39060] 00:11:16.719 bw ( KiB/s): min= 8192, max=12288, per=19.38%, avg=10240.00, stdev=2896.31, samples=2 00:11:16.719 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:11:16.719 lat (msec) : 2=0.02%, 10=0.35%, 20=33.64%, 50=65.99% 00:11:16.719 cpu : usr=2.19%, sys=6.87%, ctx=682, majf=0, minf=13 00:11:16.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:16.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.719 issued rwts: total=2265,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.719 00:11:16.719 Run status group 0 (all jobs): 00:11:16.719 READ: bw=48.1MiB/s (50.4MB/s), 7617KiB/s-24.0MiB/s (7800kB/s-25.1MB/s), io=48.3MiB (50.7MB), run=1002-1005msec 00:11:16.719 WRITE: bw=51.6MiB/s (54.1MB/s), 8167KiB/s-24.6MiB/s (8364kB/s-25.8MB/s), io=51.9MiB (54.4MB), run=1002-1005msec 00:11:16.719 00:11:16.719 Disk stats (read/write): 00:11:16.719 nvme0n1: ios=1843/2048, merge=0/0, ticks=19127/14969, in_queue=34096, util=88.87% 00:11:16.719 nvme0n2: ios=5140/5556, merge=0/0, ticks=11416/11750, in_queue=23166, util=88.19% 00:11:16.719 nvme0n3: ios=1536/1842, merge=0/0, ticks=17111/16958, in_queue=34069, util=88.90% 00:11:16.719 nvme0n4: ios=2048/2157, merge=0/0, ticks=18969/14772, in_queue=33741, util=89.36% 00:11:16.719 02:15:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:16.719 [global] 00:11:16.719 thread=1 00:11:16.719 invalidate=1 00:11:16.719 rw=randwrite 00:11:16.719 time_based=1 00:11:16.719 runtime=1 00:11:16.719 ioengine=libaio 00:11:16.719 direct=1 00:11:16.719 bs=4096 00:11:16.719 iodepth=128 00:11:16.719 norandommap=0 00:11:16.719 numjobs=1 00:11:16.719 00:11:16.719 verify_dump=1 00:11:16.719 verify_backlog=512 00:11:16.719 verify_state_save=0 00:11:16.719 do_verify=1 00:11:16.719 verify=crc32c-intel 00:11:16.719 [job0] 00:11:16.719 filename=/dev/nvme0n1 00:11:16.719 [job1] 00:11:16.719 filename=/dev/nvme0n2 00:11:16.719 [job2] 00:11:16.719 filename=/dev/nvme0n3 00:11:16.719 [job3] 00:11:16.719 filename=/dev/nvme0n4 00:11:16.719 Could not set queue depth (nvme0n1) 00:11:16.719 Could not set queue depth (nvme0n2) 00:11:16.719 Could not set queue depth (nvme0n3) 00:11:16.719 Could not set queue depth (nvme0n4) 00:11:16.719 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.719 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.719 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.719 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.719 fio-3.35 00:11:16.719 Starting 4 threads 00:11:18.096 00:11:18.096 job0: (groupid=0, jobs=1): err= 0: pid=79339: Fri Nov 8 02:15:19 2024 00:11:18.096 read: IOPS=2773, BW=10.8MiB/s (11.4MB/s)(10.9MiB/1004msec) 00:11:18.096 slat (usec): min=4, max=13009, avg=187.77, stdev=1018.27 00:11:18.096 clat (usec): min=269, max=53804, avg=22834.51, stdev=6872.08 00:11:18.096 lat (usec): min=6904, max=53822, avg=23022.27, stdev=6853.15 00:11:18.096 clat percentiles (usec): 00:11:18.096 | 1.00th=[ 7504], 5.00th=[18482], 10.00th=[18744], 20.00th=[20579], 00:11:18.096 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21103], 60.00th=[21627], 00:11:18.096 | 70.00th=[21890], 80.00th=[24511], 90.00th=[25560], 95.00th=[34341], 00:11:18.096 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:11:18.096 | 99.99th=[53740] 00:11:18.096 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:11:18.096 slat (usec): min=13, max=6003, avg=147.30, stdev=687.22 00:11:18.096 clat (usec): min=12996, max=40064, avg=20539.97, stdev=3046.45 00:11:18.096 lat (usec): min=13433, max=40081, avg=20687.27, stdev=2954.35 00:11:18.096 clat percentiles (usec): 00:11:18.096 | 1.00th=[15533], 5.00th=[16581], 10.00th=[17171], 20.00th=[19006], 00:11:18.096 | 30.00th=[20055], 40.00th=[20055], 50.00th=[20317], 60.00th=[20579], 00:11:18.096 | 70.00th=[20841], 80.00th=[21365], 90.00th=[22676], 95.00th=[25297], 00:11:18.096 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:11:18.096 | 99.99th=[40109] 00:11:18.096 bw ( KiB/s): min=12288, max=12288, per=23.56%, avg=12288.00, stdev= 0.00, samples=2 00:11:18.096 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:18.096 lat (usec) : 500=0.02% 00:11:18.096 lat (msec) : 10=0.55%, 20=25.01%, 50=73.37%, 100=1.06% 00:11:18.096 cpu : usr=3.09%, sys=9.37%, ctx=184, majf=0, minf=5 00:11:18.096 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:18.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.096 issued rwts: total=2785,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.096 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.096 job1: (groupid=0, jobs=1): err= 0: pid=79340: Fri Nov 8 02:15:19 2024 00:11:18.096 read: IOPS=3709, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1001msec) 00:11:18.096 slat (usec): min=9, max=5642, avg=122.69, stdev=649.30 00:11:18.096 clat (usec): min=268, max=22367, avg=16023.80, stdev=5603.35 00:11:18.096 lat (usec): min=2417, max=22389, avg=16146.50, stdev=5605.80 00:11:18.096 clat percentiles (usec): 00:11:18.096 | 1.00th=[ 4948], 5.00th=[10028], 10.00th=[10159], 20.00th=[10290], 00:11:18.096 | 30.00th=[10421], 40.00th=[10552], 50.00th=[20579], 60.00th=[20841], 00:11:18.096 | 70.00th=[20841], 80.00th=[21103], 90.00th=[21627], 95.00th=[21890], 00:11:18.096 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22414], 99.95th=[22414], 00:11:18.096 | 99.99th=[22414] 00:11:18.096 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:11:18.096 slat (usec): min=11, max=5773, avg=125.46, stdev=631.57 00:11:18.096 clat (usec): 
min=7592, max=21702, avg=16252.53, stdev=5132.74 00:11:18.096 lat (usec): min=9227, max=22003, avg=16377.99, stdev=5135.01 00:11:18.096 clat percentiles (usec): 00:11:18.096 | 1.00th=[ 8356], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[ 9896], 00:11:18.096 | 30.00th=[10028], 40.00th=[16057], 50.00th=[20055], 60.00th=[20055], 00:11:18.096 | 70.00th=[20317], 80.00th=[20579], 90.00th=[21103], 95.00th=[21103], 00:11:18.096 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21627], 99.95th=[21627], 00:11:18.096 | 99.99th=[21627] 00:11:18.096 bw ( KiB/s): min=12263, max=12263, per=23.51%, avg=12263.00, stdev= 0.00, samples=1 00:11:18.096 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:11:18.096 lat (usec) : 500=0.01% 00:11:18.096 lat (msec) : 4=0.41%, 10=16.94%, 20=31.17%, 50=51.47% 00:11:18.096 cpu : usr=3.30%, sys=10.70%, ctx=245, majf=0, minf=1 00:11:18.096 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:18.096 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.096 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.096 issued rwts: total=3713,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.097 job2: (groupid=0, jobs=1): err= 0: pid=79341: Fri Nov 8 02:15:19 2024 00:11:18.097 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:11:18.097 slat (usec): min=11, max=12760, avg=190.68, stdev=1039.31 00:11:18.097 clat (usec): min=16080, max=53366, avg=25160.11, stdev=8530.18 00:11:18.097 lat (usec): min=19966, max=53392, avg=25350.80, stdev=8533.04 00:11:18.097 clat percentiles (usec): 00:11:18.097 | 1.00th=[16450], 5.00th=[20579], 10.00th=[20841], 20.00th=[20841], 00:11:18.097 | 30.00th=[21103], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:11:18.097 | 70.00th=[21890], 80.00th=[22414], 90.00th=[38011], 95.00th=[44827], 00:11:18.097 | 99.00th=[53216], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:11:18.097 | 99.99th=[53216] 00:11:18.097 write: IOPS=2933, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1004msec); 0 zone resets 00:11:18.097 slat (usec): min=11, max=12959, avg=165.52, stdev=801.91 00:11:18.097 clat (usec): min=609, max=34821, avg=20973.97, stdev=3255.94 00:11:18.097 lat (usec): min=6451, max=34862, avg=21139.48, stdev=3168.92 00:11:18.097 clat percentiles (usec): 00:11:18.097 | 1.00th=[ 7242], 5.00th=[19006], 10.00th=[19530], 20.00th=[19792], 00:11:18.097 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20579], 00:11:18.097 | 70.00th=[20841], 80.00th=[21365], 90.00th=[26084], 95.00th=[26870], 00:11:18.097 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:11:18.097 | 99.99th=[34866] 00:11:18.097 bw ( KiB/s): min=10248, max=12288, per=21.61%, avg=11268.00, stdev=1442.50, samples=2 00:11:18.097 iops : min= 2562, max= 3072, avg=2817.00, stdev=360.62, samples=2 00:11:18.097 lat (usec) : 750=0.02% 00:11:18.097 lat (msec) : 10=0.58%, 20=14.06%, 50=83.85%, 100=1.49% 00:11:18.097 cpu : usr=2.59%, sys=9.27%, ctx=213, majf=0, minf=6 00:11:18.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:18.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.097 issued rwts: total=2560,2945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.097 job3: (groupid=0, jobs=1): err= 
0: pid=79342: Fri Nov 8 02:15:19 2024 00:11:18.097 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:11:18.097 slat (usec): min=9, max=8936, avg=190.43, stdev=934.34 00:11:18.097 clat (usec): min=15313, max=52259, avg=24947.36, stdev=8409.82 00:11:18.097 lat (usec): min=20136, max=52284, avg=25137.79, stdev=8422.76 00:11:18.097 clat percentiles (usec): 00:11:18.097 | 1.00th=[16188], 5.00th=[20579], 10.00th=[20579], 20.00th=[20841], 00:11:18.097 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21103], 60.00th=[21365], 00:11:18.097 | 70.00th=[21627], 80.00th=[22414], 90.00th=[38011], 95.00th=[44827], 00:11:18.097 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:11:18.097 | 99.99th=[52167] 00:11:18.097 write: IOPS=2968, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1003msec); 0 zone resets 00:11:18.097 slat (usec): min=11, max=8639, avg=164.97, stdev=810.71 00:11:18.097 clat (usec): min=526, max=36650, avg=20999.79, stdev=3349.16 00:11:18.097 lat (usec): min=5364, max=36668, avg=21164.76, stdev=3259.46 00:11:18.097 clat percentiles (usec): 00:11:18.097 | 1.00th=[ 6128], 5.00th=[18744], 10.00th=[19792], 20.00th=[20055], 00:11:18.097 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20579], 00:11:18.097 | 70.00th=[20841], 80.00th=[21365], 90.00th=[26346], 95.00th=[27395], 00:11:18.097 | 99.00th=[33162], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:11:18.097 | 99.99th=[36439] 00:11:18.097 bw ( KiB/s): min=10504, max=12312, per=21.87%, avg=11408.00, stdev=1278.45, samples=2 00:11:18.097 iops : min= 2626, max= 3078, avg=2852.00, stdev=319.61, samples=2 00:11:18.097 lat (usec) : 750=0.02% 00:11:18.097 lat (msec) : 10=0.58%, 20=11.36%, 50=86.96%, 100=1.08% 00:11:18.097 cpu : usr=3.19%, sys=7.49%, ctx=225, majf=0, minf=7 00:11:18.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:18.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:18.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:18.097 issued rwts: total=2560,2977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:18.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:18.097 00:11:18.097 Run status group 0 (all jobs): 00:11:18.097 READ: bw=45.2MiB/s (47.4MB/s), 9.96MiB/s-14.5MiB/s (10.4MB/s-15.2MB/s), io=45.4MiB (47.6MB), run=1001-1004msec 00:11:18.097 WRITE: bw=50.9MiB/s (53.4MB/s), 11.5MiB/s-16.0MiB/s (12.0MB/s-16.8MB/s), io=51.1MiB (53.6MB), run=1001-1004msec 00:11:18.097 00:11:18.097 Disk stats (read/write): 00:11:18.097 nvme0n1: ios=2610/2624, merge=0/0, ticks=13500/11529, in_queue=25029, util=89.38% 00:11:18.097 nvme0n2: ios=2993/3072, merge=0/0, ticks=11272/11729, in_queue=23001, util=88.07% 00:11:18.097 nvme0n3: ios=2400/2560, merge=0/0, ticks=13256/11739, in_queue=24995, util=89.12% 00:11:18.097 nvme0n4: ios=2400/2560, merge=0/0, ticks=12652/10625, in_queue=23277, util=89.57% 00:11:18.097 02:15:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:18.097 02:15:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=79356 00:11:18.097 02:15:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:18.097 02:15:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:18.097 [global] 00:11:18.097 thread=1 00:11:18.097 invalidate=1 00:11:18.097 rw=read 00:11:18.097 time_based=1 00:11:18.097 runtime=10 00:11:18.097 
ioengine=libaio 00:11:18.097 direct=1 00:11:18.097 bs=4096 00:11:18.097 iodepth=1 00:11:18.097 norandommap=1 00:11:18.097 numjobs=1 00:11:18.097 00:11:18.097 [job0] 00:11:18.097 filename=/dev/nvme0n1 00:11:18.097 [job1] 00:11:18.097 filename=/dev/nvme0n2 00:11:18.097 [job2] 00:11:18.097 filename=/dev/nvme0n3 00:11:18.097 [job3] 00:11:18.097 filename=/dev/nvme0n4 00:11:18.097 Could not set queue depth (nvme0n1) 00:11:18.097 Could not set queue depth (nvme0n2) 00:11:18.097 Could not set queue depth (nvme0n3) 00:11:18.097 Could not set queue depth (nvme0n4) 00:11:18.097 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.097 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.097 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.097 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:18.097 fio-3.35 00:11:18.097 Starting 4 threads 00:11:21.381 02:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:21.381 fio: pid=79399, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:21.381 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=48254976, buflen=4096 00:11:21.381 02:15:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:21.639 fio: pid=79398, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:21.639 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=51281920, buflen=4096 00:11:21.639 02:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:21.639 02:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:21.897 fio: pid=79396, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:21.897 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=52019200, buflen=4096 00:11:21.897 02:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:21.897 02:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:22.157 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=57659392, buflen=4096 00:11:22.157 fio: pid=79397, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:22.157 02:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.157 02:15:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:22.157 00:11:22.157 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79396: Fri Nov 8 02:15:23 2024 00:11:22.157 read: IOPS=3554, BW=13.9MiB/s (14.6MB/s)(49.6MiB/3573msec) 00:11:22.157 slat (usec): min=8, max=15591, avg=18.12, stdev=250.67 00:11:22.157 clat (usec): min=144, max=4180, 
avg=261.84, stdev=74.22 00:11:22.157 lat (usec): min=159, max=15898, avg=279.96, stdev=262.35 00:11:22.157 clat percentiles (usec): 00:11:22.157 | 1.00th=[ 174], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 235], 00:11:22.157 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 262], 00:11:22.157 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 326], 00:11:22.157 | 99.00th=[ 416], 99.50th=[ 445], 99.90th=[ 1156], 99.95th=[ 1729], 00:11:22.157 | 99.99th=[ 3228] 00:11:22.157 bw ( KiB/s): min=12763, max=14856, per=27.06%, avg=14369.83, stdev=796.83, samples=6 00:11:22.157 iops : min= 3190, max= 3714, avg=3592.33, stdev=199.51, samples=6 00:11:22.157 lat (usec) : 250=43.37%, 500=56.27%, 750=0.19%, 1000=0.06% 00:11:22.157 lat (msec) : 2=0.08%, 4=0.02%, 10=0.01% 00:11:22.157 cpu : usr=1.20%, sys=4.34%, ctx=12710, majf=0, minf=1 00:11:22.157 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.157 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.157 issued rwts: total=12701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.157 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.157 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79397: Fri Nov 8 02:15:23 2024 00:11:22.157 read: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(55.0MiB/3848msec) 00:11:22.157 slat (usec): min=8, max=15776, avg=19.64, stdev=197.50 00:11:22.157 clat (usec): min=132, max=3797, avg=252.03, stdev=76.25 00:11:22.157 lat (usec): min=146, max=16009, avg=271.67, stdev=212.89 00:11:22.157 clat percentiles (usec): 00:11:22.157 | 1.00th=[ 159], 5.00th=[ 172], 10.00th=[ 192], 20.00th=[ 225], 00:11:22.157 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258], 00:11:22.157 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 314], 00:11:22.157 | 99.00th=[ 396], 99.50th=[ 482], 99.90th=[ 1188], 99.95th=[ 1713], 00:11:22.157 | 99.99th=[ 3064] 00:11:22.157 bw ( KiB/s): min=13176, max=14904, per=26.88%, avg=14274.29, stdev=684.72, samples=7 00:11:22.157 iops : min= 3294, max= 3726, avg=3568.57, stdev=171.18, samples=7 00:11:22.157 lat (usec) : 250=51.34%, 500=48.18%, 750=0.29%, 1000=0.05% 00:11:22.157 lat (msec) : 2=0.09%, 4=0.04% 00:11:22.157 cpu : usr=1.27%, sys=4.89%, ctx=14088, majf=0, minf=2 00:11:22.157 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.157 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.157 issued rwts: total=14078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.157 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.158 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79398: Fri Nov 8 02:15:23 2024 00:11:22.158 read: IOPS=3819, BW=14.9MiB/s (15.6MB/s)(48.9MiB/3278msec) 00:11:22.158 slat (usec): min=8, max=11385, avg=14.52, stdev=122.59 00:11:22.158 clat (usec): min=43, max=5853, avg=245.90, stdev=108.40 00:11:22.158 lat (usec): min=161, max=11729, avg=260.42, stdev=164.74 00:11:22.158 clat percentiles (usec): 00:11:22.158 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 188], 00:11:22.158 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 253], 00:11:22.158 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 310], 95.00th=[ 334], 00:11:22.158 | 
99.00th=[ 396], 99.50th=[ 412], 99.90th=[ 1369], 99.95th=[ 2507], 00:11:22.158 | 99.99th=[ 4228] 00:11:22.158 bw ( KiB/s): min=11753, max=20440, per=29.49%, avg=15657.50, stdev=2807.58, samples=6 00:11:22.158 iops : min= 2938, max= 5110, avg=3914.33, stdev=701.96, samples=6 00:11:22.158 lat (usec) : 50=0.01%, 250=56.94%, 500=42.78%, 750=0.10%, 1000=0.05% 00:11:22.158 lat (msec) : 2=0.04%, 4=0.06%, 10=0.02% 00:11:22.158 cpu : usr=1.07%, sys=4.61%, ctx=12532, majf=0, minf=2 00:11:22.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.158 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.158 issued rwts: total=12521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.158 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79399: Fri Nov 8 02:15:23 2024 00:11:22.158 read: IOPS=3997, BW=15.6MiB/s (16.4MB/s)(46.0MiB/2947msec) 00:11:22.158 slat (nsec): min=8428, max=65129, avg=13593.73, stdev=3598.25 00:11:22.158 clat (usec): min=144, max=5431, avg=234.91, stdev=84.28 00:11:22.158 lat (usec): min=156, max=5454, avg=248.50, stdev=84.30 00:11:22.158 clat percentiles (usec): 00:11:22.158 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 178], 00:11:22.158 | 30.00th=[ 208], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 247], 00:11:22.158 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 334], 00:11:22.158 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 660], 99.95th=[ 979], 00:11:22.158 | 99.99th=[ 3785] 00:11:22.158 bw ( KiB/s): min=15016, max=20992, per=31.60%, avg=16777.60, stdev=2562.42, samples=5 00:11:22.158 iops : min= 3754, max= 5248, avg=4194.40, stdev=640.61, samples=5 00:11:22.158 lat (usec) : 250=65.20%, 500=34.65%, 750=0.06%, 1000=0.03% 00:11:22.158 lat (msec) : 2=0.01%, 4=0.03%, 10=0.01% 00:11:22.158 cpu : usr=1.32%, sys=5.23%, ctx=11787, majf=0, minf=2 00:11:22.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.158 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.158 issued rwts: total=11782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.158 00:11:22.158 Run status group 0 (all jobs): 00:11:22.158 READ: bw=51.9MiB/s (54.4MB/s), 13.9MiB/s-15.6MiB/s (14.6MB/s-16.4MB/s), io=200MiB (209MB), run=2947-3848msec 00:11:22.158 00:11:22.158 Disk stats (read/write): 00:11:22.158 nvme0n1: ios=11834/0, merge=0/0, ticks=3091/0, in_queue=3091, util=94.85% 00:11:22.158 nvme0n2: ios=12869/0, merge=0/0, ticks=3368/0, in_queue=3368, util=95.50% 00:11:22.158 nvme0n3: ios=12043/0, merge=0/0, ticks=2795/0, in_queue=2795, util=95.84% 00:11:22.158 nvme0n4: ios=11598/0, merge=0/0, ticks=2636/0, in_queue=2636, util=96.52% 00:11:22.417 02:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.417 02:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:22.675 02:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.675 
02:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:22.934 02:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.934 02:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:23.193 02:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:23.193 02:15:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:23.452 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:23.452 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 79356 00:11:23.452 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:23.452 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.452 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.452 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:23.452 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.452 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:23.452 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:23.711 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.711 nvmf hotplug test: fio failed as expected 00:11:23.711 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:23.711 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:23.711 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:23.711 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == 
tcp ']' 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.970 rmmod nvme_tcp 00:11:23.970 rmmod nvme_fabrics 00:11:23.970 rmmod nvme_keyring 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 78971 ']' 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 78971 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 78971 ']' 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 78971 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78971 00:11:23.970 killing process with pid 78971 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78971' 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 78971 00:11:23.970 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 78971 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 
-- # ip link set nvmf_tgt_br nomaster 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:24.229 02:15:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:24.229 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:24.229 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:24.229 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:24.229 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:24.229 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:24.229 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.488 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.488 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.488 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:24.488 00:11:24.488 real 0m19.868s 00:11:24.488 user 1m14.862s 00:11:24.488 sys 0m10.314s 00:11:24.488 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.488 ************************************ 00:11:24.488 END TEST nvmf_fio_target 00:11:24.488 ************************************ 00:11:24.488 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.488 02:15:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:24.488 02:15:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:24.488 02:15:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.488 02:15:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:24.488 ************************************ 00:11:24.488 START TEST nvmf_bdevio 00:11:24.488 ************************************ 00:11:24.488 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:24.489 * Looking for test storage... 
00:11:24.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:24.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.489 --rc genhtml_branch_coverage=1 00:11:24.489 --rc genhtml_function_coverage=1 00:11:24.489 --rc genhtml_legend=1 00:11:24.489 --rc geninfo_all_blocks=1 00:11:24.489 --rc geninfo_unexecuted_blocks=1 00:11:24.489 00:11:24.489 ' 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:24.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.489 --rc genhtml_branch_coverage=1 00:11:24.489 --rc genhtml_function_coverage=1 00:11:24.489 --rc genhtml_legend=1 00:11:24.489 --rc geninfo_all_blocks=1 00:11:24.489 --rc geninfo_unexecuted_blocks=1 00:11:24.489 00:11:24.489 ' 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:24.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.489 --rc genhtml_branch_coverage=1 00:11:24.489 --rc genhtml_function_coverage=1 00:11:24.489 --rc genhtml_legend=1 00:11:24.489 --rc geninfo_all_blocks=1 00:11:24.489 --rc geninfo_unexecuted_blocks=1 00:11:24.489 00:11:24.489 ' 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:24.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.489 --rc genhtml_branch_coverage=1 00:11:24.489 --rc genhtml_function_coverage=1 00:11:24.489 --rc genhtml_legend=1 00:11:24.489 --rc geninfo_all_blocks=1 00:11:24.489 --rc geninfo_unexecuted_blocks=1 00:11:24.489 00:11:24.489 ' 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:24.489 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:24.749 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:24.750 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
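At this point bdevio.sh has set MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 and calls nvmftestinit, which for NET_TYPE=virt builds the veth/namespace topology that the trace below walks through command by command. A condensed sketch of that setup (interface and namespace names as used by nvmf/common.sh; only one of the two initiator/target pairs is shown, and exact option order may differ between SPDK versions):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br           # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                     # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up          # bridge joining both sides
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # tagged SPDK_NVMF for later cleanup

The "Cannot find device" and "Cannot open network namespace" messages that follow are expected: the fini helpers run first and simply find nothing left over from a previous test to delete.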
00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:24.750 Cannot find device "nvmf_init_br" 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:24.750 Cannot find device "nvmf_init_br2" 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:24.750 Cannot find device "nvmf_tgt_br" 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:24.750 Cannot find device "nvmf_tgt_br2" 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:24.750 Cannot find device "nvmf_init_br" 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:24.750 Cannot find device "nvmf_init_br2" 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:24.750 Cannot find device "nvmf_tgt_br" 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:24.750 Cannot find device "nvmf_tgt_br2" 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:24.750 Cannot find device "nvmf_br" 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:24.750 Cannot find device "nvmf_init_if" 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:24.750 Cannot find device "nvmf_init_if2" 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:24.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:24.750 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:24.750 
02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:24.750 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:25.010 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:25.010 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:11:25.010 00:11:25.010 --- 10.0.0.3 ping statistics --- 00:11:25.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.010 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:25.010 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:25.010 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:11:25.010 00:11:25.010 --- 10.0.0.4 ping statistics --- 00:11:25.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.010 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:25.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:25.010 00:11:25.010 --- 10.0.0.1 ping statistics --- 00:11:25.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.010 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:25.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:25.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:11:25.010 00:11:25.010 --- 10.0.0.2 ping statistics --- 00:11:25.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.010 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=79718 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 79718 00:11:25.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 79718 ']' 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.010 02:15:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.010 [2024-11-08 02:15:26.866874] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
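The target is launched inside the namespace as nvmf_tgt -i 0 -e 0xFFFF -m 0x78. The -m argument is a hexadecimal CPU core mask: 0x78 = 0b0111'1000, i.e. bits 3 through 6 set, which is why four reactor threads are reported on cores 3, 4, 5 and 6 once EAL initialization below completes. A quick shell check of the mask:

  printf '%d\n' 0x78          # 120
  echo 'obase=2; 120' | bc    # 1111000  -> bits 3..6 set -> cores 3-6

The bdevio application started later in this test uses -c 0x7 the same way, so it comes up with three reactors on cores 0, 1 and 2.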
00:11:25.010 [2024-11-08 02:15:26.866990] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.270 [2024-11-08 02:15:27.010668] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.270 [2024-11-08 02:15:27.051118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.270 [2024-11-08 02:15:27.051185] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.270 [2024-11-08 02:15:27.051200] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.270 [2024-11-08 02:15:27.051211] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.270 [2024-11-08 02:15:27.051220] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.270 [2024-11-08 02:15:27.051388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:25.270 [2024-11-08 02:15:27.052151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:25.270 [2024-11-08 02:15:27.052223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.270 [2024-11-08 02:15:27.052222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:25.270 [2024-11-08 02:15:27.085308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 [2024-11-08 02:15:27.920209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 Malloc0 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.206 [2024-11-08 02:15:27.966546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:26.206 { 00:11:26.206 "params": { 00:11:26.206 "name": "Nvme$subsystem", 00:11:26.206 "trtype": "$TEST_TRANSPORT", 00:11:26.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:26.206 "adrfam": "ipv4", 00:11:26.206 "trsvcid": "$NVMF_PORT", 00:11:26.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:26.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:26.206 "hdgst": ${hdgst:-false}, 00:11:26.206 "ddgst": ${ddgst:-false} 00:11:26.206 }, 00:11:26.206 "method": "bdev_nvme_attach_controller" 00:11:26.206 } 00:11:26.206 EOF 00:11:26.206 )") 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
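The rpc_cmd calls above (rpc_cmd is essentially a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock) configure the target side: a TCP transport, a 64 MiB malloc bdev, a subsystem with that bdev as a namespace, and a listener on the namespaced address. Stated as standalone rpc.py invocations:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

gen_nvmf_target_json then fills the bdev_nvme_attach_controller template printed just below and hands it to bdevio via --json /dev/fd/62, so the bdevio process attaches an NVMe-oF controller over TCP to 10.0.0.3:4420 and runs its block-device tests against the resulting Nvme1n1 bdev.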
00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:11:26.206 02:15:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:26.206 "params": { 00:11:26.206 "name": "Nvme1", 00:11:26.206 "trtype": "tcp", 00:11:26.207 "traddr": "10.0.0.3", 00:11:26.207 "adrfam": "ipv4", 00:11:26.207 "trsvcid": "4420", 00:11:26.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:26.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:26.207 "hdgst": false, 00:11:26.207 "ddgst": false 00:11:26.207 }, 00:11:26.207 "method": "bdev_nvme_attach_controller" 00:11:26.207 }' 00:11:26.207 [2024-11-08 02:15:28.029830] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:26.207 [2024-11-08 02:15:28.029946] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79754 ] 00:11:26.465 [2024-11-08 02:15:28.174351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:26.465 [2024-11-08 02:15:28.215146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.465 [2024-11-08 02:15:28.215241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.465 [2024-11-08 02:15:28.215619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.465 [2024-11-08 02:15:28.254193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:26.724 I/O targets: 00:11:26.724 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:26.724 00:11:26.724 00:11:26.724 CUnit - A unit testing framework for C - Version 2.1-3 00:11:26.724 http://cunit.sourceforge.net/ 00:11:26.724 00:11:26.724 00:11:26.724 Suite: bdevio tests on: Nvme1n1 00:11:26.724 Test: blockdev write read block ...passed 00:11:26.724 Test: blockdev write zeroes read block ...passed 00:11:26.724 Test: blockdev write zeroes read no split ...passed 00:11:26.724 Test: blockdev write zeroes read split ...passed 00:11:26.724 Test: blockdev write zeroes read split partial ...passed 00:11:26.724 Test: blockdev reset ...[2024-11-08 02:15:28.386323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:26.724 [2024-11-08 02:15:28.386764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e4b40 (9): Bad file descriptor 00:11:26.724 [2024-11-08 02:15:28.402912] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:26.724 passed 00:11:26.724 Test: blockdev write read 8 blocks ...passed 00:11:26.724 Test: blockdev write read size > 128k ...passed 00:11:26.724 Test: blockdev write read invalid size ...passed 00:11:26.724 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:26.724 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:26.724 Test: blockdev write read max offset ...passed 00:11:26.724 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:26.724 Test: blockdev writev readv 8 blocks ...passed 00:11:26.724 Test: blockdev writev readv 30 x 1block ...passed 00:11:26.724 Test: blockdev writev readv block ...passed 00:11:26.724 Test: blockdev writev readv size > 128k ...passed 00:11:26.724 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:26.724 Test: blockdev comparev and writev ...[2024-11-08 02:15:28.412939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.724 [2024-11-08 02:15:28.413155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:26.724 [2024-11-08 02:15:28.413190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.724 [2024-11-08 02:15:28.413204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:26.724 [2024-11-08 02:15:28.413537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.724 [2024-11-08 02:15:28.413565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:26.724 [2024-11-08 02:15:28.413587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.724 [2024-11-08 02:15:28.413600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:26.724 [2024-11-08 02:15:28.413893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.724 [2024-11-08 02:15:28.413920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:26.724 [2024-11-08 02:15:28.413942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.724 [2024-11-08 02:15:28.413954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:26.724 [2024-11-08 02:15:28.414295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.724 [2024-11-08 02:15:28.414554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:26.724 [2024-11-08 02:15:28.414717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:26.724 [2024-11-08 02:15:28.414886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:26.724 passed 00:11:26.724 Test: blockdev nvme passthru rw ...passed 00:11:26.724 Test: blockdev nvme passthru vendor specific ...[2024-11-08 02:15:28.416119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.724 [2024-11-08 02:15:28.416149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:26.724 passed 00:11:26.724 Test: blockdev nvme admin passthru ...[2024-11-08 02:15:28.416281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.724 [2024-11-08 02:15:28.416314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:26.724 [2024-11-08 02:15:28.416441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.724 [2024-11-08 02:15:28.416467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:26.724 [2024-11-08 02:15:28.416580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:26.724 [2024-11-08 02:15:28.416605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:26.724 passed 00:11:26.724 Test: blockdev copy ...passed 00:11:26.724 00:11:26.724 Run Summary: Type Total Ran Passed Failed Inactive 00:11:26.724 suites 1 1 n/a 0 0 00:11:26.724 tests 23 23 23 0 0 00:11:26.724 asserts 152 152 152 0 n/a 00:11:26.724 00:11:26.724 Elapsed time = 0.146 seconds 00:11:26.724 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.724 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.724 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.724 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.724 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:26.724 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:26.724 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:26.724 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.984 rmmod nvme_tcp 00:11:26.984 rmmod nvme_fabrics 00:11:26.984 rmmod nvme_keyring 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
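With the Run Summary in (23 of 23 tests passed, 152 asserts, 0.146 s), the subsystem is deleted and the trap'd nvmftestfini unwinds the setup in reverse. Condensed, the cleanup traced around this point amounts to the following (the final namespace deletion is an assumption about what the xtrace-suppressed _remove_spdk_ns helper boils down to):

  modprobe -v -r nvme-tcp; modprobe -v -r nvme-fabrics   # retried under set +e, hence the rmmod lines
  kill 79718 && wait 79718                               # killprocess: the nvmf_tgt started earlier
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged SPDK_NVMF
  ip link set nvmf_init_br nomaster; ip link set nvmf_tgt_br nomaster
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if; ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk                       # assumption: the effect of _remove_spdk_ns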
00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 79718 ']' 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 79718 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 79718 ']' 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 79718 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79718 00:11:26.984 killing process with pid 79718 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79718' 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 79718 00:11:26.984 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 79718 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:27.243 02:15:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:27.243 02:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:11:27.243 02:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:27.243 02:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.243 02:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.243 02:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:27.243 02:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:27.503 00:11:27.503 real 0m2.969s 00:11:27.503 user 0m8.636s 00:11:27.503 sys 0m0.762s 00:11:27.503 ************************************ 00:11:27.503 END TEST nvmf_bdevio 00:11:27.503 ************************************ 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:27.503 ************************************ 00:11:27.503 END TEST nvmf_target_core 00:11:27.503 ************************************ 00:11:27.503 00:11:27.503 real 2m29.181s 00:11:27.503 user 6m32.064s 00:11:27.503 sys 0m52.537s 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:27.503 02:15:29 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:27.503 02:15:29 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.503 02:15:29 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.503 02:15:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:27.503 ************************************ 00:11:27.503 START TEST nvmf_target_extra 00:11:27.503 ************************************ 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:27.503 * Looking for test storage... 
00:11:27.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:27.503 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:27.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.763 --rc genhtml_branch_coverage=1 00:11:27.763 --rc genhtml_function_coverage=1 00:11:27.763 --rc genhtml_legend=1 00:11:27.763 --rc geninfo_all_blocks=1 00:11:27.763 --rc geninfo_unexecuted_blocks=1 00:11:27.763 00:11:27.763 ' 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:27.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.763 --rc genhtml_branch_coverage=1 00:11:27.763 --rc genhtml_function_coverage=1 00:11:27.763 --rc genhtml_legend=1 00:11:27.763 --rc geninfo_all_blocks=1 00:11:27.763 --rc geninfo_unexecuted_blocks=1 00:11:27.763 00:11:27.763 ' 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:27.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.763 --rc genhtml_branch_coverage=1 00:11:27.763 --rc genhtml_function_coverage=1 00:11:27.763 --rc genhtml_legend=1 00:11:27.763 --rc geninfo_all_blocks=1 00:11:27.763 --rc geninfo_unexecuted_blocks=1 00:11:27.763 00:11:27.763 ' 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:27.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.763 --rc genhtml_branch_coverage=1 00:11:27.763 --rc genhtml_function_coverage=1 00:11:27.763 --rc genhtml_legend=1 00:11:27.763 --rc geninfo_all_blocks=1 00:11:27.763 --rc geninfo_unexecuted_blocks=1 00:11:27.763 00:11:27.763 ' 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.763 02:15:29 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.763 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.764 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.764 ************************************ 00:11:27.764 START TEST nvmf_auth_target 00:11:27.764 ************************************ 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:27.764 * Looking for test storage... 
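The decimal/cmp_versions steps logged above (and repeated just below for the nvmf_auth_target run) are a plain field-by-field dotted-version compare: here lt 1.15 2 decides that the installed lcov predates version 2, which selects the extra --rc lcov_* coverage flags. A minimal standalone sketch of the same comparison, with illustrative helper names rather than the exact scripts/common.sh source:

ver_field() {                               # numeric field, or 0 if non-numeric
    [[ $1 =~ ^[0-9]+$ ]] && echo "$1" || echo 0
}
version_lt() {                              # succeeds when $1 sorts before $2
    local -a v1 v2
    local i a b len
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < len; i++)); do
        a=$(ver_field "${v1[i]:-0}")
        b=$(ver_field "${v2[i]:-0}")
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                                # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"    # same branch the log takes above
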
00:11:27.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:27.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.764 --rc genhtml_branch_coverage=1 00:11:27.764 --rc genhtml_function_coverage=1 00:11:27.764 --rc genhtml_legend=1 00:11:27.764 --rc geninfo_all_blocks=1 00:11:27.764 --rc geninfo_unexecuted_blocks=1 00:11:27.764 00:11:27.764 ' 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:27.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.764 --rc genhtml_branch_coverage=1 00:11:27.764 --rc genhtml_function_coverage=1 00:11:27.764 --rc genhtml_legend=1 00:11:27.764 --rc geninfo_all_blocks=1 00:11:27.764 --rc geninfo_unexecuted_blocks=1 00:11:27.764 00:11:27.764 ' 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:27.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.764 --rc genhtml_branch_coverage=1 00:11:27.764 --rc genhtml_function_coverage=1 00:11:27.764 --rc genhtml_legend=1 00:11:27.764 --rc geninfo_all_blocks=1 00:11:27.764 --rc geninfo_unexecuted_blocks=1 00:11:27.764 00:11:27.764 ' 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:27.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.764 --rc genhtml_branch_coverage=1 00:11:27.764 --rc genhtml_function_coverage=1 00:11:27.764 --rc genhtml_legend=1 00:11:27.764 --rc geninfo_all_blocks=1 00:11:27.764 --rc geninfo_unexecuted_blocks=1 00:11:27.764 00:11:27.764 ' 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.764 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.023 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:28.023 
02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:28.023 Cannot find device "nvmf_init_br" 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:28.023 Cannot find device "nvmf_init_br2" 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:28.023 Cannot find device "nvmf_tgt_br" 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:28.023 Cannot find device "nvmf_tgt_br2" 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:28.023 Cannot find device "nvmf_init_br" 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:28.023 Cannot find device "nvmf_init_br2" 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:28.023 Cannot find device "nvmf_tgt_br" 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:28.023 Cannot find device "nvmf_tgt_br2" 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:28.023 Cannot find device "nvmf_br" 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:28.023 Cannot find device "nvmf_init_if" 00:11:28.023 02:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:28.023 Cannot find device "nvmf_init_if2" 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:28.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:28.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:28.023 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:28.024 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:28.024 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:28.024 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:28.282 02:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:28.282 02:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:28.282 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:28.282 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:11:28.282 00:11:28.282 --- 10.0.0.3 ping statistics --- 00:11:28.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.282 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:28.282 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:28.282 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:11:28.282 00:11:28.282 --- 10.0.0.4 ping statistics --- 00:11:28.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.282 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:28.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:28.282 00:11:28.282 --- 10.0.0.1 ping statistics --- 00:11:28.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.282 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:28.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:11:28.282 00:11:28.282 --- 10.0.0.2 ping statistics --- 00:11:28.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.282 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:28.282 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:28.283 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.283 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.283 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=80038 00:11:28.283 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 80038 00:11:28.283 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 80038 ']' 00:11:28.283 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:28.283 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.283 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.283 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
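The nvmf_veth_init sequence above first tears down any leftover interfaces (the "Cannot find device" lines are the expected misses on a clean host), then builds two initiator-side and two target-side veth pairs, moves the target ends into the nvmf_tgt_ns_spdk namespace, bridges everything through nvmf_br, opens TCP port 4420, and ping-checks both directions before nvmf_tgt is launched inside the namespace. Condensed to its essentials, with the same interface names and addresses as in the log, this recap sketch (not the nvmf/common.sh source) is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator pair 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator pair 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target pair 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if     # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # one bridge joins both sides
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT    # NVMe/TCP listener port
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                              # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1               # target namespace -> host
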
00:11:28.283 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.283 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=80062 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=a2f45f98d42c6a69195207c015a1be53de223cd2db73fc67 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Gd9 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key a2f45f98d42c6a69195207c015a1be53de223cd2db73fc67 0 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 a2f45f98d42c6a69195207c015a1be53de223cd2db73fc67 0 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=a2f45f98d42c6a69195207c015a1be53de223cd2db73fc67 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:28.850 02:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Gd9 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Gd9 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Gd9 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=a592161dfb3e080fcb7ca08d53f1e226b9d24c63cd2d15d1019eeb0ac5d72eb3 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.nqr 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key a592161dfb3e080fcb7ca08d53f1e226b9d24c63cd2d15d1019eeb0ac5d72eb3 3 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 a592161dfb3e080fcb7ca08d53f1e226b9d24c63cd2d15d1019eeb0ac5d72eb3 3 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=a592161dfb3e080fcb7ca08d53f1e226b9d24c63cd2d15d1019eeb0ac5d72eb3 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.nqr 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.nqr 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.nqr 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:11:28.850 02:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=49262a0b373a42c7fb4e9a1d0e78cc1e 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:11:28.850 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.uoF 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 49262a0b373a42c7fb4e9a1d0e78cc1e 1 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 49262a0b373a42c7fb4e9a1d0e78cc1e 1 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=49262a0b373a42c7fb4e9a1d0e78cc1e 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.uoF 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.uoF 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.uoF 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=4dbb1d73b2354ecd6455f49523a1c77eb5db5fe507f2fb14 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.cbl 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 4dbb1d73b2354ecd6455f49523a1c77eb5db5fe507f2fb14 2 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 4dbb1d73b2354ecd6455f49523a1c77eb5db5fe507f2fb14 2 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=4dbb1d73b2354ecd6455f49523a1c77eb5db5fe507f2fb14 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:11:28.851 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:29.110 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.cbl 00:11:29.110 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.cbl 00:11:29.110 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.cbl 00:11:29.110 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:29.110 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:29.110 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:29.110 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:29.110 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=5b774109a37105bc535760a91c1f2a09f751e6dea1e77dc3 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.ucm 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 5b774109a37105bc535760a91c1f2a09f751e6dea1e77dc3 2 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 5b774109a37105bc535760a91c1f2a09f751e6dea1e77dc3 2 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=5b774109a37105bc535760a91c1f2a09f751e6dea1e77dc3 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.ucm 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.ucm 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.ucm 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:29.111 02:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=cca858a839f4bad9a65fc2b8d3beb5e1 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.PeO 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key cca858a839f4bad9a65fc2b8d3beb5e1 1 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 cca858a839f4bad9a65fc2b8d3beb5e1 1 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=cca858a839f4bad9a65fc2b8d3beb5e1 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.PeO 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.PeO 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.PeO 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=616ec4dcf45f2d46c4b9ed9d1b963c75aaf23430d5e1f359d44e097dad14dfc0 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.tcs 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
616ec4dcf45f2d46c4b9ed9d1b963c75aaf23430d5e1f359d44e097dad14dfc0 3 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 616ec4dcf45f2d46c4b9ed9d1b963c75aaf23430d5e1f359d44e097dad14dfc0 3 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=616ec4dcf45f2d46c4b9ed9d1b963c75aaf23430d5e1f359d44e097dad14dfc0 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.tcs 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.tcs 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.tcs 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 80038 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 80038 ']' 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.111 02:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.678 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.678 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:29.678 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 80062 /var/tmp/host.sock 00:11:29.678 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 80062 ']' 00:11:29.678 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:11:29.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:29.678 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.678 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
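Each gen_dhchap_key call above draws random bytes with xxd, wraps them in a DH-HMAC-CHAP secret string via a small inline python step, and stashes the result in a 0600 temp file that is later registered with keyring_file_add_key. A rough standalone sketch of that flow follows; the helper name is made up, the hash-id mapping (0=null, 1=sha256, 2=sha384, 3=sha512) comes from the digests table shown above, and the DHHC-1 layout (base64 of the secret plus a little-endian CRC-32) is stated here as an assumption from the NVMe DH-HMAC-CHAP secret format rather than read out of the log:

gen_dhchap_key_sketch() {                    # hypothetical helper, not the nvmf/common.sh one
    local digest_id=$1 hexlen=$2 key file
    key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)            # hex string, $hexlen chars
    file=$(mktemp -t spdk.key-sketch.XXX)
    # Assumed layout: "DHHC-1:<hash id>:base64(secret + CRC-32(secret)):"
    python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); d = int(sys.argv[2]); print("DHHC-1:%02x:%s:" % (d, base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()), end="")' "$key" "$digest_id" > "$file"
    chmod 0600 "$file"                       # secrets are kept mode 0600, as in the log
    echo "$file"
}
key0=$(gen_dhchap_key_sketch 0 48)           # null-hash secret, 48 hex chars, like keys[0]
ckey0=$(gen_dhchap_key_sketch 3 64)          # sha512 controller key, 64 hex chars, like ckeys[0]
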
00:11:29.678 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.678 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Gd9 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.937 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Gd9 00:11:29.938 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Gd9 00:11:30.196 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.nqr ]] 00:11:30.196 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nqr 00:11:30.196 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.196 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.196 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.196 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nqr 00:11:30.196 02:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nqr 00:11:30.454 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:30.454 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uoF 00:11:30.454 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.454 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.454 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.454 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.uoF 00:11:30.454 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.uoF 00:11:30.713 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.cbl ]] 00:11:30.713 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cbl 00:11:30.713 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.713 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.713 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.713 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cbl 00:11:30.713 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cbl 00:11:30.972 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:30.972 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ucm 00:11:30.972 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.972 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.972 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.972 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ucm 00:11:30.972 02:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ucm 00:11:31.233 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.PeO ]] 00:11:31.233 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PeO 00:11:31.233 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.233 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.233 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.233 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PeO 00:11:31.233 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PeO 00:11:31.493 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:31.493 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tcs 00:11:31.493 02:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.493 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.493 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.493 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tcs 00:11:31.493 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tcs 00:11:31.752 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:31.752 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:31.752 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:31.752 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.752 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:31.752 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.319 02:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.577 00:11:32.577 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.577 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.577 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.835 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.835 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.835 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.835 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.835 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.835 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.835 { 00:11:32.835 "cntlid": 1, 00:11:32.835 "qid": 0, 00:11:32.835 "state": "enabled", 00:11:32.835 "thread": "nvmf_tgt_poll_group_000", 00:11:32.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:32.835 "listen_address": { 00:11:32.835 "trtype": "TCP", 00:11:32.835 "adrfam": "IPv4", 00:11:32.835 "traddr": "10.0.0.3", 00:11:32.835 "trsvcid": "4420" 00:11:32.835 }, 00:11:32.836 "peer_address": { 00:11:32.836 "trtype": "TCP", 00:11:32.836 "adrfam": "IPv4", 00:11:32.836 "traddr": "10.0.0.1", 00:11:32.836 "trsvcid": "58526" 00:11:32.836 }, 00:11:32.836 "auth": { 00:11:32.836 "state": "completed", 00:11:32.836 "digest": "sha256", 00:11:32.836 "dhgroup": "null" 00:11:32.836 } 00:11:32.836 } 00:11:32.836 ]' 00:11:32.836 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.836 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.836 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.836 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:32.836 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.094 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.094 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.094 02:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.353 02:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:11:33.353 02:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.648 02:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.648 02:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.648 00:11:38.648 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.648 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.648 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.907 { 00:11:38.907 "cntlid": 3, 00:11:38.907 "qid": 0, 00:11:38.907 "state": "enabled", 00:11:38.907 "thread": "nvmf_tgt_poll_group_000", 00:11:38.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:38.907 "listen_address": { 00:11:38.907 "trtype": "TCP", 00:11:38.907 "adrfam": "IPv4", 00:11:38.907 "traddr": "10.0.0.3", 00:11:38.907 "trsvcid": "4420" 00:11:38.907 }, 00:11:38.907 "peer_address": { 00:11:38.907 "trtype": "TCP", 00:11:38.907 "adrfam": "IPv4", 00:11:38.907 "traddr": "10.0.0.1", 00:11:38.907 "trsvcid": "34870" 00:11:38.907 }, 00:11:38.907 "auth": { 00:11:38.907 "state": "completed", 00:11:38.907 "digest": "sha256", 00:11:38.907 "dhgroup": "null" 00:11:38.907 } 00:11:38.907 } 00:11:38.907 ]' 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.907 02:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.166 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret 
DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:11:39.166 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:11:40.103 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.103 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:40.103 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.103 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.103 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.103 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.103 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:40.103 02:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.362 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.621 00:11:40.621 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.621 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.621 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.880 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.880 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.880 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.880 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.880 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.880 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.880 { 00:11:40.880 "cntlid": 5, 00:11:40.880 "qid": 0, 00:11:40.880 "state": "enabled", 00:11:40.880 "thread": "nvmf_tgt_poll_group_000", 00:11:40.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:40.880 "listen_address": { 00:11:40.880 "trtype": "TCP", 00:11:40.880 "adrfam": "IPv4", 00:11:40.880 "traddr": "10.0.0.3", 00:11:40.880 "trsvcid": "4420" 00:11:40.880 }, 00:11:40.880 "peer_address": { 00:11:40.880 "trtype": "TCP", 00:11:40.880 "adrfam": "IPv4", 00:11:40.880 "traddr": "10.0.0.1", 00:11:40.880 "trsvcid": "34898" 00:11:40.880 }, 00:11:40.880 "auth": { 00:11:40.880 "state": "completed", 00:11:40.880 "digest": "sha256", 00:11:40.880 "dhgroup": "null" 00:11:40.880 } 00:11:40.880 } 00:11:40.880 ]' 00:11:40.880 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.140 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:41.140 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.140 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:41.140 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.140 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.140 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.140 02:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.399 02:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:11:41.399 02:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:11:42.337 02:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.337 02:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:42.337 02:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.337 02:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.337 02:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.337 02:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.337 02:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:42.337 02:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:42.337 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:42.905 00:11:42.905 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.905 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.905 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.164 { 00:11:43.164 "cntlid": 7, 00:11:43.164 "qid": 0, 00:11:43.164 "state": "enabled", 00:11:43.164 "thread": "nvmf_tgt_poll_group_000", 00:11:43.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:43.164 "listen_address": { 00:11:43.164 "trtype": "TCP", 00:11:43.164 "adrfam": "IPv4", 00:11:43.164 "traddr": "10.0.0.3", 00:11:43.164 "trsvcid": "4420" 00:11:43.164 }, 00:11:43.164 "peer_address": { 00:11:43.164 "trtype": "TCP", 00:11:43.164 "adrfam": "IPv4", 00:11:43.164 "traddr": "10.0.0.1", 00:11:43.164 "trsvcid": "34914" 00:11:43.164 }, 00:11:43.164 "auth": { 00:11:43.164 "state": "completed", 00:11:43.164 "digest": "sha256", 00:11:43.164 "dhgroup": "null" 00:11:43.164 } 00:11:43.164 } 00:11:43.164 ]' 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.164 02:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.424 02:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:11:43.424 02:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:11:44.362 02:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.362 02:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:44.362 02:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.362 02:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.362 02:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.362 02:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.362 02:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.362 02:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:44.362 02:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:44.621 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:44.621 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.621 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:44.621 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:44.621 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:44.621 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.621 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.621 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.622 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.622 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.622 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.622 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.622 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.881 00:11:44.881 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.881 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.881 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.141 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.141 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.141 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.141 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.141 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.141 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.141 { 00:11:45.141 "cntlid": 9, 00:11:45.141 "qid": 0, 00:11:45.141 "state": "enabled", 00:11:45.141 "thread": "nvmf_tgt_poll_group_000", 00:11:45.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:45.141 "listen_address": { 00:11:45.141 "trtype": "TCP", 00:11:45.141 "adrfam": "IPv4", 00:11:45.141 "traddr": "10.0.0.3", 00:11:45.141 "trsvcid": "4420" 00:11:45.141 }, 00:11:45.141 "peer_address": { 00:11:45.141 "trtype": "TCP", 00:11:45.141 "adrfam": "IPv4", 00:11:45.141 "traddr": "10.0.0.1", 00:11:45.141 "trsvcid": "34944" 00:11:45.141 }, 00:11:45.141 "auth": { 00:11:45.141 "state": "completed", 00:11:45.141 "digest": "sha256", 00:11:45.141 "dhgroup": "ffdhe2048" 00:11:45.141 } 00:11:45.141 } 00:11:45.141 ]' 00:11:45.141 02:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.141 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:45.141 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.400 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:45.400 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.400 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.400 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.400 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.659 
02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:11:45.659 02:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:11:46.227 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.228 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:46.228 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.228 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.228 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.228 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.228 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:46.228 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.795 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.054 00:11:47.054 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.054 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.054 02:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.314 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.314 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.314 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.314 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.314 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.314 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.314 { 00:11:47.314 "cntlid": 11, 00:11:47.314 "qid": 0, 00:11:47.314 "state": "enabled", 00:11:47.314 "thread": "nvmf_tgt_poll_group_000", 00:11:47.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:47.314 "listen_address": { 00:11:47.314 "trtype": "TCP", 00:11:47.314 "adrfam": "IPv4", 00:11:47.314 "traddr": "10.0.0.3", 00:11:47.314 "trsvcid": "4420" 00:11:47.314 }, 00:11:47.314 "peer_address": { 00:11:47.314 "trtype": "TCP", 00:11:47.314 "adrfam": "IPv4", 00:11:47.314 "traddr": "10.0.0.1", 00:11:47.314 "trsvcid": "34970" 00:11:47.314 }, 00:11:47.314 "auth": { 00:11:47.314 "state": "completed", 00:11:47.314 "digest": "sha256", 00:11:47.314 "dhgroup": "ffdhe2048" 00:11:47.314 } 00:11:47.314 } 00:11:47.314 ]' 00:11:47.314 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.314 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:47.314 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.314 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:47.314 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.573 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.573 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.573 
02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.833 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:11:47.833 02:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:11:48.400 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.400 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:48.400 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.400 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.400 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.400 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.400 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:48.400 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.659 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.227 00:11:49.227 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.227 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.227 02:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.486 { 00:11:49.486 "cntlid": 13, 00:11:49.486 "qid": 0, 00:11:49.486 "state": "enabled", 00:11:49.486 "thread": "nvmf_tgt_poll_group_000", 00:11:49.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:49.486 "listen_address": { 00:11:49.486 "trtype": "TCP", 00:11:49.486 "adrfam": "IPv4", 00:11:49.486 "traddr": "10.0.0.3", 00:11:49.486 "trsvcid": "4420" 00:11:49.486 }, 00:11:49.486 "peer_address": { 00:11:49.486 "trtype": "TCP", 00:11:49.486 "adrfam": "IPv4", 00:11:49.486 "traddr": "10.0.0.1", 00:11:49.486 "trsvcid": "52616" 00:11:49.486 }, 00:11:49.486 "auth": { 00:11:49.486 "state": "completed", 00:11:49.486 "digest": "sha256", 00:11:49.486 "dhgroup": "ffdhe2048" 00:11:49.486 } 00:11:49.486 } 00:11:49.486 ]' 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.486 02:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.486 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.745 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:11:49.745 02:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:11:50.312 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.570 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:50.570 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.570 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.570 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.570 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.570 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:50.570 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
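This stretch of the log is one pass of the connect_authenticate loop (sha256 digest, ffdhe2048 group, key3, which has no controller key). Every pass issues the same RPC sequence: pin the host-side digest and DH group, authorize the host NQN on the subsystem with the key under test, attach a controller through the host RPC socket, read the qpair back to confirm that authentication completed with the expected parameters, and detach. A condensed sketch of one pass, using only RPCs and flags that appear in the trace (socket paths, NQNs and key names are the test's own values), might look like this:

# Condensed sketch of one connect_authenticate pass (sha256 / ffdhe2048 / key0 with ckey0), as seen in the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: restrict the initiator to the digest/dhgroup combination under test.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Target side (default /var/tmp/spdk.sock): authorize the host with key0, bidirectional via ckey0.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, authenticating with the same key pair.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Check what the qpair actually negotiated; the test expects "completed sha256 ffdhe2048".
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'

# Tear down before the next digest/dhgroup/key combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

The same combinations are also exercised through the kernel initiator: nvme connect is invoked with --dhchap-secret and --dhchap-ctrl-secret carrying the DHHC-1 strings directly, followed by nvme disconnect and nvmf_subsystem_remove_host, which is what the disconnect entries further down record.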
00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.828 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:51.086 00:11:51.086 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.086 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.086 02:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.400 { 00:11:51.400 "cntlid": 15, 00:11:51.400 "qid": 0, 00:11:51.400 "state": "enabled", 00:11:51.400 "thread": "nvmf_tgt_poll_group_000", 00:11:51.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:51.400 "listen_address": { 00:11:51.400 "trtype": "TCP", 00:11:51.400 "adrfam": "IPv4", 00:11:51.400 "traddr": "10.0.0.3", 00:11:51.400 "trsvcid": "4420" 00:11:51.400 }, 00:11:51.400 "peer_address": { 00:11:51.400 "trtype": "TCP", 00:11:51.400 "adrfam": "IPv4", 00:11:51.400 "traddr": "10.0.0.1", 00:11:51.400 "trsvcid": "52646" 00:11:51.400 }, 00:11:51.400 "auth": { 00:11:51.400 "state": "completed", 00:11:51.400 "digest": "sha256", 00:11:51.400 "dhgroup": "ffdhe2048" 00:11:51.400 } 00:11:51.400 } 00:11:51.400 ]' 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.400 
02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.400 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.659 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:11:51.660 02:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.597 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.598 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.598 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.598 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.598 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.598 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.598 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.205 00:11:53.205 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.205 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.205 02:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.476 { 00:11:53.476 "cntlid": 17, 00:11:53.476 "qid": 0, 00:11:53.476 "state": "enabled", 00:11:53.476 "thread": "nvmf_tgt_poll_group_000", 00:11:53.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:53.476 "listen_address": { 00:11:53.476 "trtype": "TCP", 00:11:53.476 "adrfam": "IPv4", 00:11:53.476 "traddr": "10.0.0.3", 00:11:53.476 "trsvcid": "4420" 00:11:53.476 }, 00:11:53.476 "peer_address": { 00:11:53.476 "trtype": "TCP", 00:11:53.476 "adrfam": "IPv4", 00:11:53.476 "traddr": "10.0.0.1", 00:11:53.476 "trsvcid": "52686" 00:11:53.476 }, 00:11:53.476 "auth": { 00:11:53.476 "state": "completed", 00:11:53.476 "digest": "sha256", 00:11:53.476 "dhgroup": "ffdhe3072" 00:11:53.476 } 00:11:53.476 } 00:11:53.476 ]' 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.476 02:15:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.476 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.044 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:11:54.044 02:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:11:54.612 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.612 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:54.612 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.612 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.612 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.612 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.612 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:54.612 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:54.871 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.872 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.130 00:11:55.130 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.130 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.130 02:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.389 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.389 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.389 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.389 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.389 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.389 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.389 { 00:11:55.389 "cntlid": 19, 00:11:55.389 "qid": 0, 00:11:55.389 "state": "enabled", 00:11:55.389 "thread": "nvmf_tgt_poll_group_000", 00:11:55.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:55.389 "listen_address": { 00:11:55.389 "trtype": "TCP", 00:11:55.389 "adrfam": "IPv4", 00:11:55.389 "traddr": "10.0.0.3", 00:11:55.389 "trsvcid": "4420" 00:11:55.389 }, 00:11:55.389 "peer_address": { 00:11:55.389 "trtype": "TCP", 00:11:55.389 "adrfam": "IPv4", 00:11:55.389 "traddr": "10.0.0.1", 00:11:55.389 "trsvcid": "52710" 00:11:55.389 }, 00:11:55.389 "auth": { 00:11:55.389 "state": "completed", 00:11:55.389 "digest": "sha256", 00:11:55.389 "dhgroup": "ffdhe3072" 00:11:55.389 } 00:11:55.389 } 00:11:55.389 ]' 00:11:55.389 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.389 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.389 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.649 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:55.649 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.649 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.649 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.649 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.908 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:11:55.908 02:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:11:56.477 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.477 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:56.477 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.477 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.477 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.477 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.477 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:56.477 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:56.735 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:56.735 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.735 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:56.736 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:56.736 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:56.736 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.736 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.736 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.736 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.995 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.995 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.995 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.995 02:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.254 00:11:57.254 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.254 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.254 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.513 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.513 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.513 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.513 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.513 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.513 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.513 { 00:11:57.513 "cntlid": 21, 00:11:57.513 "qid": 0, 00:11:57.513 "state": "enabled", 00:11:57.513 "thread": "nvmf_tgt_poll_group_000", 00:11:57.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:57.513 "listen_address": { 00:11:57.513 "trtype": "TCP", 00:11:57.513 "adrfam": "IPv4", 00:11:57.513 "traddr": "10.0.0.3", 00:11:57.513 "trsvcid": "4420" 00:11:57.513 }, 00:11:57.513 "peer_address": { 00:11:57.513 "trtype": "TCP", 00:11:57.513 "adrfam": "IPv4", 00:11:57.513 "traddr": "10.0.0.1", 00:11:57.513 "trsvcid": "52726" 00:11:57.513 }, 00:11:57.513 "auth": { 00:11:57.513 "state": "completed", 00:11:57.513 "digest": "sha256", 00:11:57.513 "dhgroup": "ffdhe3072" 00:11:57.513 } 00:11:57.513 } 00:11:57.513 ]' 00:11:57.513 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.513 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.513 02:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.772 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:57.772 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.772 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.772 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.772 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.030 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:11:58.030 02:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:11:58.597 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.597 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:11:58.597 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.597 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.855 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.855 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.855 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:58.855 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:58.855 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.856 02:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.423 00:11:59.423 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.424 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.424 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.682 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.682 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.682 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.682 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.682 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.682 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.682 { 00:11:59.682 "cntlid": 23, 00:11:59.682 "qid": 0, 00:11:59.682 "state": "enabled", 00:11:59.682 "thread": "nvmf_tgt_poll_group_000", 00:11:59.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:11:59.682 "listen_address": { 00:11:59.683 "trtype": "TCP", 00:11:59.683 "adrfam": "IPv4", 00:11:59.683 "traddr": "10.0.0.3", 00:11:59.683 "trsvcid": "4420" 00:11:59.683 }, 00:11:59.683 "peer_address": { 00:11:59.683 "trtype": "TCP", 00:11:59.683 "adrfam": "IPv4", 00:11:59.683 "traddr": "10.0.0.1", 00:11:59.683 "trsvcid": "54088" 00:11:59.683 }, 00:11:59.683 "auth": { 00:11:59.683 "state": "completed", 00:11:59.683 "digest": "sha256", 00:11:59.683 "dhgroup": "ffdhe3072" 00:11:59.683 } 00:11:59.683 } 00:11:59.683 ]' 00:11:59.683 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.683 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:59.683 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.683 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:59.683 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.683 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.683 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.683 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.942 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:11:59.942 02:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.878 02:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.137 00:12:01.396 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.396 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.396 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.654 { 00:12:01.654 "cntlid": 25, 00:12:01.654 "qid": 0, 00:12:01.654 "state": "enabled", 00:12:01.654 "thread": "nvmf_tgt_poll_group_000", 00:12:01.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:01.654 "listen_address": { 00:12:01.654 "trtype": "TCP", 00:12:01.654 "adrfam": "IPv4", 00:12:01.654 "traddr": "10.0.0.3", 00:12:01.654 "trsvcid": "4420" 00:12:01.654 }, 00:12:01.654 "peer_address": { 00:12:01.654 "trtype": "TCP", 00:12:01.654 "adrfam": "IPv4", 00:12:01.654 "traddr": "10.0.0.1", 00:12:01.654 "trsvcid": "54108" 00:12:01.654 }, 00:12:01.654 "auth": { 00:12:01.654 "state": "completed", 00:12:01.654 "digest": "sha256", 00:12:01.654 "dhgroup": "ffdhe4096" 00:12:01.654 } 00:12:01.654 } 00:12:01.654 ]' 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.654 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.913 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:01.913 02:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:02.849 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.849 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:02.849 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.849 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.849 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.849 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.849 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.850 02:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.108 00:12:03.367 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.367 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.367 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.627 { 00:12:03.627 "cntlid": 27, 00:12:03.627 "qid": 0, 00:12:03.627 "state": "enabled", 00:12:03.627 "thread": "nvmf_tgt_poll_group_000", 00:12:03.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:03.627 "listen_address": { 00:12:03.627 "trtype": "TCP", 00:12:03.627 "adrfam": "IPv4", 00:12:03.627 "traddr": "10.0.0.3", 00:12:03.627 "trsvcid": "4420" 00:12:03.627 }, 00:12:03.627 "peer_address": { 00:12:03.627 "trtype": "TCP", 00:12:03.627 "adrfam": "IPv4", 00:12:03.627 "traddr": "10.0.0.1", 00:12:03.627 "trsvcid": "54144" 00:12:03.627 }, 00:12:03.627 "auth": { 00:12:03.627 "state": "completed", 
00:12:03.627 "digest": "sha256", 00:12:03.627 "dhgroup": "ffdhe4096" 00:12:03.627 } 00:12:03.627 } 00:12:03.627 ]' 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.627 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.886 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:03.886 02:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.823 02:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.823 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.392 00:12:05.392 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.392 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.392 02:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.392 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.392 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.392 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.392 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.392 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.392 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.392 { 00:12:05.392 "cntlid": 29, 00:12:05.392 "qid": 0, 00:12:05.392 "state": "enabled", 00:12:05.392 "thread": "nvmf_tgt_poll_group_000", 00:12:05.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:05.392 "listen_address": { 00:12:05.392 "trtype": "TCP", 00:12:05.392 "adrfam": "IPv4", 00:12:05.392 "traddr": "10.0.0.3", 00:12:05.392 "trsvcid": "4420" 00:12:05.392 }, 00:12:05.392 "peer_address": { 00:12:05.392 "trtype": "TCP", 00:12:05.392 "adrfam": 
"IPv4", 00:12:05.392 "traddr": "10.0.0.1", 00:12:05.392 "trsvcid": "54172" 00:12:05.392 }, 00:12:05.392 "auth": { 00:12:05.392 "state": "completed", 00:12:05.392 "digest": "sha256", 00:12:05.392 "dhgroup": "ffdhe4096" 00:12:05.392 } 00:12:05.392 } 00:12:05.392 ]' 00:12:05.392 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.651 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:05.651 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.651 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:05.651 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.651 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.651 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.651 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.910 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:05.910 02:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:06.478 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.478 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:06.478 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.478 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.478 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.478 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.478 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:06.478 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:06.737 02:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:06.737 02:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.304 00:12:07.304 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.304 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.304 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.563 { 00:12:07.563 "cntlid": 31, 00:12:07.563 "qid": 0, 00:12:07.563 "state": "enabled", 00:12:07.563 "thread": "nvmf_tgt_poll_group_000", 00:12:07.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:07.563 "listen_address": { 00:12:07.563 "trtype": "TCP", 00:12:07.563 "adrfam": "IPv4", 00:12:07.563 "traddr": "10.0.0.3", 00:12:07.563 "trsvcid": "4420" 00:12:07.563 }, 00:12:07.563 "peer_address": { 00:12:07.563 "trtype": "TCP", 
00:12:07.563 "adrfam": "IPv4", 00:12:07.563 "traddr": "10.0.0.1", 00:12:07.563 "trsvcid": "54210" 00:12:07.563 }, 00:12:07.563 "auth": { 00:12:07.563 "state": "completed", 00:12:07.563 "digest": "sha256", 00:12:07.563 "dhgroup": "ffdhe4096" 00:12:07.563 } 00:12:07.563 } 00:12:07.563 ]' 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.563 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.822 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:07.822 02:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:08.755 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.755 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:08.755 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.755 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.755 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.755 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:08.755 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.755 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:08.755 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:09.026 
02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.026 02:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.286 00:12:09.286 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.286 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.286 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.545 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.545 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.545 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.545 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.545 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.545 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.545 { 00:12:09.545 "cntlid": 33, 00:12:09.545 "qid": 0, 00:12:09.545 "state": "enabled", 00:12:09.545 "thread": "nvmf_tgt_poll_group_000", 00:12:09.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:09.545 "listen_address": { 00:12:09.545 "trtype": "TCP", 00:12:09.545 "adrfam": "IPv4", 00:12:09.545 "traddr": 
"10.0.0.3", 00:12:09.545 "trsvcid": "4420" 00:12:09.545 }, 00:12:09.545 "peer_address": { 00:12:09.545 "trtype": "TCP", 00:12:09.545 "adrfam": "IPv4", 00:12:09.545 "traddr": "10.0.0.1", 00:12:09.545 "trsvcid": "38154" 00:12:09.545 }, 00:12:09.545 "auth": { 00:12:09.545 "state": "completed", 00:12:09.545 "digest": "sha256", 00:12:09.545 "dhgroup": "ffdhe6144" 00:12:09.545 } 00:12:09.545 } 00:12:09.545 ]' 00:12:09.545 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.545 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:09.804 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.805 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:09.805 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.805 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.805 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.805 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.063 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:10.063 02:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:10.631 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.631 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:10.631 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.631 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.631 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.631 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.631 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:10.631 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.890 02:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.455 00:12:11.455 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.455 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.455 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.713 { 00:12:11.713 "cntlid": 35, 00:12:11.713 "qid": 0, 00:12:11.713 "state": "enabled", 00:12:11.713 "thread": "nvmf_tgt_poll_group_000", 
00:12:11.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:11.713 "listen_address": { 00:12:11.713 "trtype": "TCP", 00:12:11.713 "adrfam": "IPv4", 00:12:11.713 "traddr": "10.0.0.3", 00:12:11.713 "trsvcid": "4420" 00:12:11.713 }, 00:12:11.713 "peer_address": { 00:12:11.713 "trtype": "TCP", 00:12:11.713 "adrfam": "IPv4", 00:12:11.713 "traddr": "10.0.0.1", 00:12:11.713 "trsvcid": "38180" 00:12:11.713 }, 00:12:11.713 "auth": { 00:12:11.713 "state": "completed", 00:12:11.713 "digest": "sha256", 00:12:11.713 "dhgroup": "ffdhe6144" 00:12:11.713 } 00:12:11.713 } 00:12:11.713 ]' 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.713 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.278 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:12.278 02:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:12.844 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.844 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:12.844 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.844 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.844 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.844 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.844 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:12.844 02:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.103 02:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.670 00:12:13.671 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.671 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.671 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.931 { 
00:12:13.931 "cntlid": 37, 00:12:13.931 "qid": 0, 00:12:13.931 "state": "enabled", 00:12:13.931 "thread": "nvmf_tgt_poll_group_000", 00:12:13.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:13.931 "listen_address": { 00:12:13.931 "trtype": "TCP", 00:12:13.931 "adrfam": "IPv4", 00:12:13.931 "traddr": "10.0.0.3", 00:12:13.931 "trsvcid": "4420" 00:12:13.931 }, 00:12:13.931 "peer_address": { 00:12:13.931 "trtype": "TCP", 00:12:13.931 "adrfam": "IPv4", 00:12:13.931 "traddr": "10.0.0.1", 00:12:13.931 "trsvcid": "38200" 00:12:13.931 }, 00:12:13.931 "auth": { 00:12:13.931 "state": "completed", 00:12:13.931 "digest": "sha256", 00:12:13.931 "dhgroup": "ffdhe6144" 00:12:13.931 } 00:12:13.931 } 00:12:13.931 ]' 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.931 02:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.189 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:14.189 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:15.124 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.124 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:15.124 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.124 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.124 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.124 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.124 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:15.124 02:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.383 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.641 00:12:15.641 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.641 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.641 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.208 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.208 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.208 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.208 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.208 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.208 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:12:16.208 { 00:12:16.208 "cntlid": 39, 00:12:16.208 "qid": 0, 00:12:16.209 "state": "enabled", 00:12:16.209 "thread": "nvmf_tgt_poll_group_000", 00:12:16.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:16.209 "listen_address": { 00:12:16.209 "trtype": "TCP", 00:12:16.209 "adrfam": "IPv4", 00:12:16.209 "traddr": "10.0.0.3", 00:12:16.209 "trsvcid": "4420" 00:12:16.209 }, 00:12:16.209 "peer_address": { 00:12:16.209 "trtype": "TCP", 00:12:16.209 "adrfam": "IPv4", 00:12:16.209 "traddr": "10.0.0.1", 00:12:16.209 "trsvcid": "38240" 00:12:16.209 }, 00:12:16.209 "auth": { 00:12:16.209 "state": "completed", 00:12:16.209 "digest": "sha256", 00:12:16.209 "dhgroup": "ffdhe6144" 00:12:16.209 } 00:12:16.209 } 00:12:16.209 ]' 00:12:16.209 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.209 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:16.209 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.209 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:16.209 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.209 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.209 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.209 02:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.468 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:16.468 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:17.034 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.293 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:17.293 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.293 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.293 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.293 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:17.293 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.293 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:17.293 02:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.551 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.118 00:12:18.118 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.118 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.118 02:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.376 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.376 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.376 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.376 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.376 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:18.376 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.376 { 00:12:18.376 "cntlid": 41, 00:12:18.376 "qid": 0, 00:12:18.376 "state": "enabled", 00:12:18.376 "thread": "nvmf_tgt_poll_group_000", 00:12:18.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:18.376 "listen_address": { 00:12:18.376 "trtype": "TCP", 00:12:18.376 "adrfam": "IPv4", 00:12:18.376 "traddr": "10.0.0.3", 00:12:18.376 "trsvcid": "4420" 00:12:18.376 }, 00:12:18.376 "peer_address": { 00:12:18.376 "trtype": "TCP", 00:12:18.376 "adrfam": "IPv4", 00:12:18.376 "traddr": "10.0.0.1", 00:12:18.376 "trsvcid": "38258" 00:12:18.376 }, 00:12:18.376 "auth": { 00:12:18.376 "state": "completed", 00:12:18.376 "digest": "sha256", 00:12:18.376 "dhgroup": "ffdhe8192" 00:12:18.376 } 00:12:18.376 } 00:12:18.376 ]' 00:12:18.376 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.376 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:18.376 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.634 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:18.634 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.634 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.634 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.634 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.893 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:18.893 02:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:19.460 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.460 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:19.460 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.461 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.461 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:19.461 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.461 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:19.461 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.719 02:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.655 00:12:20.655 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.655 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.655 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.655 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.655 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.655 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.655 02:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.655 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.655 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.655 { 00:12:20.655 "cntlid": 43, 00:12:20.655 "qid": 0, 00:12:20.655 "state": "enabled", 00:12:20.655 "thread": "nvmf_tgt_poll_group_000", 00:12:20.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:20.655 "listen_address": { 00:12:20.655 "trtype": "TCP", 00:12:20.655 "adrfam": "IPv4", 00:12:20.655 "traddr": "10.0.0.3", 00:12:20.655 "trsvcid": "4420" 00:12:20.655 }, 00:12:20.655 "peer_address": { 00:12:20.655 "trtype": "TCP", 00:12:20.655 "adrfam": "IPv4", 00:12:20.655 "traddr": "10.0.0.1", 00:12:20.655 "trsvcid": "45984" 00:12:20.655 }, 00:12:20.655 "auth": { 00:12:20.656 "state": "completed", 00:12:20.656 "digest": "sha256", 00:12:20.656 "dhgroup": "ffdhe8192" 00:12:20.656 } 00:12:20.656 } 00:12:20.656 ]' 00:12:20.656 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.914 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:20.914 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.914 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:20.914 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.914 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.914 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.914 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.172 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:21.172 02:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:21.736 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.736 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:21.736 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.736 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:21.736 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.736 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.736 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:21.736 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.994 02:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:22.560 00:12:22.560 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.560 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.560 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.819 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.819 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.819 02:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.819 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.819 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.819 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.819 { 00:12:22.819 "cntlid": 45, 00:12:22.819 "qid": 0, 00:12:22.819 "state": "enabled", 00:12:22.819 "thread": "nvmf_tgt_poll_group_000", 00:12:22.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:22.819 "listen_address": { 00:12:22.819 "trtype": "TCP", 00:12:22.819 "adrfam": "IPv4", 00:12:22.819 "traddr": "10.0.0.3", 00:12:22.819 "trsvcid": "4420" 00:12:22.819 }, 00:12:22.819 "peer_address": { 00:12:22.819 "trtype": "TCP", 00:12:22.819 "adrfam": "IPv4", 00:12:22.819 "traddr": "10.0.0.1", 00:12:22.819 "trsvcid": "46000" 00:12:22.819 }, 00:12:22.819 "auth": { 00:12:22.819 "state": "completed", 00:12:22.819 "digest": "sha256", 00:12:22.819 "dhgroup": "ffdhe8192" 00:12:22.819 } 00:12:22.819 } 00:12:22.819 ]' 00:12:22.819 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.098 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:23.098 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.098 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.098 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.098 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.098 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.098 02:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.361 02:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:23.361 02:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:23.928 02:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.928 02:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:23.928 02:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:23.928 02:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.928 02:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.928 02:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.928 02:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:23.928 02:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.187 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:24.755 00:12:24.755 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.755 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.755 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.322 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.322 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.322 
02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.322 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.322 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.322 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.322 { 00:12:25.322 "cntlid": 47, 00:12:25.322 "qid": 0, 00:12:25.322 "state": "enabled", 00:12:25.322 "thread": "nvmf_tgt_poll_group_000", 00:12:25.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:25.322 "listen_address": { 00:12:25.322 "trtype": "TCP", 00:12:25.322 "adrfam": "IPv4", 00:12:25.322 "traddr": "10.0.0.3", 00:12:25.322 "trsvcid": "4420" 00:12:25.322 }, 00:12:25.322 "peer_address": { 00:12:25.322 "trtype": "TCP", 00:12:25.322 "adrfam": "IPv4", 00:12:25.322 "traddr": "10.0.0.1", 00:12:25.322 "trsvcid": "46024" 00:12:25.322 }, 00:12:25.322 "auth": { 00:12:25.322 "state": "completed", 00:12:25.322 "digest": "sha256", 00:12:25.322 "dhgroup": "ffdhe8192" 00:12:25.322 } 00:12:25.322 } 00:12:25.322 ]' 00:12:25.322 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.322 02:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:25.322 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.322 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:25.322 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.322 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.322 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.322 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.581 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:25.581 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:26.147 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.147 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:26.147 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.147 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:26.147 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.147 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:26.147 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:26.147 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.147 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:26.147 02:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:26.406 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:26.406 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.406 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:26.406 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:26.406 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:26.406 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.664 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.664 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.664 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.664 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.664 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.664 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.664 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:26.923 00:12:26.923 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.923 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.923 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.182 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.182 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.182 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.182 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.182 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.182 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.182 { 00:12:27.182 "cntlid": 49, 00:12:27.182 "qid": 0, 00:12:27.182 "state": "enabled", 00:12:27.182 "thread": "nvmf_tgt_poll_group_000", 00:12:27.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:27.182 "listen_address": { 00:12:27.182 "trtype": "TCP", 00:12:27.182 "adrfam": "IPv4", 00:12:27.182 "traddr": "10.0.0.3", 00:12:27.182 "trsvcid": "4420" 00:12:27.182 }, 00:12:27.182 "peer_address": { 00:12:27.182 "trtype": "TCP", 00:12:27.182 "adrfam": "IPv4", 00:12:27.182 "traddr": "10.0.0.1", 00:12:27.182 "trsvcid": "46068" 00:12:27.182 }, 00:12:27.182 "auth": { 00:12:27.182 "state": "completed", 00:12:27.182 "digest": "sha384", 00:12:27.182 "dhgroup": "null" 00:12:27.182 } 00:12:27.182 } 00:12:27.182 ]' 00:12:27.182 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.182 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.182 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.182 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:27.182 02:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.182 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.182 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.182 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.441 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:27.441 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:28.009 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.009 02:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:28.009 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.009 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.009 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.009 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.009 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:28.009 02:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.576 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:28.835 00:12:28.835 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.835 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:12:28.835 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.094 { 00:12:29.094 "cntlid": 51, 00:12:29.094 "qid": 0, 00:12:29.094 "state": "enabled", 00:12:29.094 "thread": "nvmf_tgt_poll_group_000", 00:12:29.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:29.094 "listen_address": { 00:12:29.094 "trtype": "TCP", 00:12:29.094 "adrfam": "IPv4", 00:12:29.094 "traddr": "10.0.0.3", 00:12:29.094 "trsvcid": "4420" 00:12:29.094 }, 00:12:29.094 "peer_address": { 00:12:29.094 "trtype": "TCP", 00:12:29.094 "adrfam": "IPv4", 00:12:29.094 "traddr": "10.0.0.1", 00:12:29.094 "trsvcid": "46192" 00:12:29.094 }, 00:12:29.094 "auth": { 00:12:29.094 "state": "completed", 00:12:29.094 "digest": "sha384", 00:12:29.094 "dhgroup": "null" 00:12:29.094 } 00:12:29.094 } 00:12:29.094 ]' 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.094 02:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.353 02:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:29.353 02:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:29.921 02:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.181 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.181 02:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:30.181 02:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.181 02:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.181 02:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.181 02:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.181 02:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:30.181 02:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.441 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.700 00:12:30.700 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.700 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:12:30.700 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.960 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.960 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.960 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.960 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.960 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.960 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.960 { 00:12:30.960 "cntlid": 53, 00:12:30.960 "qid": 0, 00:12:30.960 "state": "enabled", 00:12:30.960 "thread": "nvmf_tgt_poll_group_000", 00:12:30.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:30.960 "listen_address": { 00:12:30.960 "trtype": "TCP", 00:12:30.960 "adrfam": "IPv4", 00:12:30.960 "traddr": "10.0.0.3", 00:12:30.960 "trsvcid": "4420" 00:12:30.960 }, 00:12:30.960 "peer_address": { 00:12:30.960 "trtype": "TCP", 00:12:30.960 "adrfam": "IPv4", 00:12:30.960 "traddr": "10.0.0.1", 00:12:30.960 "trsvcid": "46224" 00:12:30.960 }, 00:12:30.960 "auth": { 00:12:30.960 "state": "completed", 00:12:30.960 "digest": "sha384", 00:12:30.960 "dhgroup": "null" 00:12:30.960 } 00:12:30.960 } 00:12:30.960 ]' 00:12:30.960 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.960 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.960 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.220 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:31.220 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.220 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.220 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.220 02:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.479 02:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:31.479 02:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:32.048 02:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.048 02:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:32.048 02:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.048 02:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.048 02:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.048 02:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.048 02:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:32.048 02:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.308 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.876 00:12:32.876 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.876 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.876 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.135 { 00:12:33.135 "cntlid": 55, 00:12:33.135 "qid": 0, 00:12:33.135 "state": "enabled", 00:12:33.135 "thread": "nvmf_tgt_poll_group_000", 00:12:33.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:33.135 "listen_address": { 00:12:33.135 "trtype": "TCP", 00:12:33.135 "adrfam": "IPv4", 00:12:33.135 "traddr": "10.0.0.3", 00:12:33.135 "trsvcid": "4420" 00:12:33.135 }, 00:12:33.135 "peer_address": { 00:12:33.135 "trtype": "TCP", 00:12:33.135 "adrfam": "IPv4", 00:12:33.135 "traddr": "10.0.0.1", 00:12:33.135 "trsvcid": "46254" 00:12:33.135 }, 00:12:33.135 "auth": { 00:12:33.135 "state": "completed", 00:12:33.135 "digest": "sha384", 00:12:33.135 "dhgroup": "null" 00:12:33.135 } 00:12:33.135 } 00:12:33.135 ]' 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.135 02:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.394 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:33.394 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:34.331 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:12:34.331 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:34.331 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.331 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.331 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.331 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:34.331 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.331 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:34.331 02:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.331 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.590 00:12:34.849 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:12:34.849 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.849 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.108 { 00:12:35.108 "cntlid": 57, 00:12:35.108 "qid": 0, 00:12:35.108 "state": "enabled", 00:12:35.108 "thread": "nvmf_tgt_poll_group_000", 00:12:35.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:35.108 "listen_address": { 00:12:35.108 "trtype": "TCP", 00:12:35.108 "adrfam": "IPv4", 00:12:35.108 "traddr": "10.0.0.3", 00:12:35.108 "trsvcid": "4420" 00:12:35.108 }, 00:12:35.108 "peer_address": { 00:12:35.108 "trtype": "TCP", 00:12:35.108 "adrfam": "IPv4", 00:12:35.108 "traddr": "10.0.0.1", 00:12:35.108 "trsvcid": "46282" 00:12:35.108 }, 00:12:35.108 "auth": { 00:12:35.108 "state": "completed", 00:12:35.108 "digest": "sha384", 00:12:35.108 "dhgroup": "ffdhe2048" 00:12:35.108 } 00:12:35.108 } 00:12:35.108 ]' 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.108 02:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.367 02:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:35.367 02:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: 
--dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:36.304 02:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.304 02:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:36.304 02:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.304 02:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.304 02:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.304 02:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.304 02:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:36.304 02:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.564 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.822 00:12:36.822 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.822 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.822 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.081 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.081 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.081 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.081 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.081 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.081 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.081 { 00:12:37.081 "cntlid": 59, 00:12:37.081 "qid": 0, 00:12:37.081 "state": "enabled", 00:12:37.081 "thread": "nvmf_tgt_poll_group_000", 00:12:37.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:37.081 "listen_address": { 00:12:37.081 "trtype": "TCP", 00:12:37.081 "adrfam": "IPv4", 00:12:37.081 "traddr": "10.0.0.3", 00:12:37.081 "trsvcid": "4420" 00:12:37.081 }, 00:12:37.081 "peer_address": { 00:12:37.081 "trtype": "TCP", 00:12:37.081 "adrfam": "IPv4", 00:12:37.081 "traddr": "10.0.0.1", 00:12:37.081 "trsvcid": "46322" 00:12:37.081 }, 00:12:37.081 "auth": { 00:12:37.081 "state": "completed", 00:12:37.081 "digest": "sha384", 00:12:37.081 "dhgroup": "ffdhe2048" 00:12:37.081 } 00:12:37.081 } 00:12:37.081 ]' 00:12:37.081 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.081 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:37.081 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.081 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:37.081 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.348 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.348 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.348 02:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.623 02:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:37.623 02:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:38.191 02:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.191 02:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:38.191 02:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.191 02:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.191 02:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.191 02:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.191 02:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:38.191 02:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.458 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.720 00:12:38.720 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.720 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.720 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.979 { 00:12:38.979 "cntlid": 61, 00:12:38.979 "qid": 0, 00:12:38.979 "state": "enabled", 00:12:38.979 "thread": "nvmf_tgt_poll_group_000", 00:12:38.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:38.979 "listen_address": { 00:12:38.979 "trtype": "TCP", 00:12:38.979 "adrfam": "IPv4", 00:12:38.979 "traddr": "10.0.0.3", 00:12:38.979 "trsvcid": "4420" 00:12:38.979 }, 00:12:38.979 "peer_address": { 00:12:38.979 "trtype": "TCP", 00:12:38.979 "adrfam": "IPv4", 00:12:38.979 "traddr": "10.0.0.1", 00:12:38.979 "trsvcid": "35776" 00:12:38.979 }, 00:12:38.979 "auth": { 00:12:38.979 "state": "completed", 00:12:38.979 "digest": "sha384", 00:12:38.979 "dhgroup": "ffdhe2048" 00:12:38.979 } 00:12:38.979 } 00:12:38.979 ]' 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.979 02:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.546 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:39.546 02:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:40.114 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.114 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:40.114 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.114 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.114 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.114 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.114 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:40.114 02:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.373 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.632 00:12:40.632 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.632 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.632 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.891 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.891 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.891 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.891 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.891 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.891 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.891 { 00:12:40.891 "cntlid": 63, 00:12:40.891 "qid": 0, 00:12:40.891 "state": "enabled", 00:12:40.891 "thread": "nvmf_tgt_poll_group_000", 00:12:40.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:40.891 "listen_address": { 00:12:40.891 "trtype": "TCP", 00:12:40.891 "adrfam": "IPv4", 00:12:40.891 "traddr": "10.0.0.3", 00:12:40.891 "trsvcid": "4420" 00:12:40.891 }, 00:12:40.891 "peer_address": { 00:12:40.891 "trtype": "TCP", 00:12:40.891 "adrfam": "IPv4", 00:12:40.891 "traddr": "10.0.0.1", 00:12:40.891 "trsvcid": "35800" 00:12:40.891 }, 00:12:40.891 "auth": { 00:12:40.891 "state": "completed", 00:12:40.891 "digest": "sha384", 00:12:40.891 "dhgroup": "ffdhe2048" 00:12:40.891 } 00:12:40.891 } 00:12:40.891 ]' 00:12:40.891 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.891 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.891 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.150 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:41.150 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.150 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.150 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.150 02:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.410 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:41.410 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:41.978 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.978 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:41.978 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.978 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.978 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.978 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.978 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.978 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:41.978 02:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:42.237 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.497 00:12:42.757 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.757 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.757 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.016 { 00:12:43.016 "cntlid": 65, 00:12:43.016 "qid": 0, 00:12:43.016 "state": "enabled", 00:12:43.016 "thread": "nvmf_tgt_poll_group_000", 00:12:43.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:43.016 "listen_address": { 00:12:43.016 "trtype": "TCP", 00:12:43.016 "adrfam": "IPv4", 00:12:43.016 "traddr": "10.0.0.3", 00:12:43.016 "trsvcid": "4420" 00:12:43.016 }, 00:12:43.016 "peer_address": { 00:12:43.016 "trtype": "TCP", 00:12:43.016 "adrfam": "IPv4", 00:12:43.016 "traddr": "10.0.0.1", 00:12:43.016 "trsvcid": "35838" 00:12:43.016 }, 00:12:43.016 "auth": { 00:12:43.016 "state": "completed", 00:12:43.016 "digest": "sha384", 00:12:43.016 "dhgroup": "ffdhe3072" 00:12:43.016 } 00:12:43.016 } 00:12:43.016 ]' 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.016 02:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.274 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:43.274 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:43.841 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.099 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:44.099 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.099 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.099 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.099 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.099 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:44.099 02:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.358 02:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.358 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.617 00:12:44.617 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.617 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.617 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.876 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.876 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.876 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.876 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.876 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.876 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.876 { 00:12:44.876 "cntlid": 67, 00:12:44.876 "qid": 0, 00:12:44.876 "state": "enabled", 00:12:44.876 "thread": "nvmf_tgt_poll_group_000", 00:12:44.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:44.876 "listen_address": { 00:12:44.876 "trtype": "TCP", 00:12:44.876 "adrfam": "IPv4", 00:12:44.876 "traddr": "10.0.0.3", 00:12:44.876 "trsvcid": "4420" 00:12:44.876 }, 00:12:44.876 "peer_address": { 00:12:44.876 "trtype": "TCP", 00:12:44.876 "adrfam": "IPv4", 00:12:44.876 "traddr": "10.0.0.1", 00:12:44.876 "trsvcid": "35856" 00:12:44.876 }, 00:12:44.876 "auth": { 00:12:44.876 "state": "completed", 00:12:44.876 "digest": "sha384", 00:12:44.876 "dhgroup": "ffdhe3072" 00:12:44.876 } 00:12:44.876 } 00:12:44.876 ]' 00:12:44.876 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.876 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.876 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.134 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:45.134 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.134 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.134 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.135 02:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.393 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:45.393 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:45.961 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.961 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:45.961 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.961 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.961 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.961 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.961 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:45.961 02:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.219 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.220 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.786 00:12:46.786 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.786 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.786 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.046 { 00:12:47.046 "cntlid": 69, 00:12:47.046 "qid": 0, 00:12:47.046 "state": "enabled", 00:12:47.046 "thread": "nvmf_tgt_poll_group_000", 00:12:47.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:47.046 "listen_address": { 00:12:47.046 "trtype": "TCP", 00:12:47.046 "adrfam": "IPv4", 00:12:47.046 "traddr": "10.0.0.3", 00:12:47.046 "trsvcid": "4420" 00:12:47.046 }, 00:12:47.046 "peer_address": { 00:12:47.046 "trtype": "TCP", 00:12:47.046 "adrfam": "IPv4", 00:12:47.046 "traddr": "10.0.0.1", 00:12:47.046 "trsvcid": "35878" 00:12:47.046 }, 00:12:47.046 "auth": { 00:12:47.046 "state": "completed", 00:12:47.046 "digest": "sha384", 00:12:47.046 "dhgroup": "ffdhe3072" 00:12:47.046 } 00:12:47.046 } 00:12:47.046 ]' 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:47.046 02:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.305 02:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:47.305 02:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:48.240 02:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.241 02:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:48.241 02:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.241 02:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.241 02:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.241 02:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.241 02:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:48.241 02:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:48.241 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:48.241 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.241 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:48.241 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:48.241 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:48.241 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.241 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:12:48.241 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.241 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.499 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.499 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:48.499 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.499 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.758 00:12:48.758 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.758 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.758 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.017 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.017 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.017 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.017 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.017 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.017 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.017 { 00:12:49.017 "cntlid": 71, 00:12:49.017 "qid": 0, 00:12:49.017 "state": "enabled", 00:12:49.017 "thread": "nvmf_tgt_poll_group_000", 00:12:49.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:49.017 "listen_address": { 00:12:49.017 "trtype": "TCP", 00:12:49.017 "adrfam": "IPv4", 00:12:49.017 "traddr": "10.0.0.3", 00:12:49.017 "trsvcid": "4420" 00:12:49.017 }, 00:12:49.017 "peer_address": { 00:12:49.017 "trtype": "TCP", 00:12:49.017 "adrfam": "IPv4", 00:12:49.017 "traddr": "10.0.0.1", 00:12:49.017 "trsvcid": "52732" 00:12:49.017 }, 00:12:49.017 "auth": { 00:12:49.017 "state": "completed", 00:12:49.017 "digest": "sha384", 00:12:49.017 "dhgroup": "ffdhe3072" 00:12:49.017 } 00:12:49.017 } 00:12:49.017 ]' 00:12:49.017 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.276 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:49.276 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.276 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:49.276 02:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.276 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.276 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.276 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.534 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:49.534 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:50.101 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.101 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:50.101 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.101 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.101 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.101 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:50.101 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.101 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:50.101 02:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:50.360 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:50.360 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.360 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:50.360 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:50.360 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:50.360 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.360 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.360 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.360 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.619 02:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.619 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.619 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.619 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.877 00:12:50.877 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.877 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.877 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.137 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.137 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.137 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.137 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.137 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.137 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.137 { 00:12:51.137 "cntlid": 73, 00:12:51.137 "qid": 0, 00:12:51.137 "state": "enabled", 00:12:51.137 "thread": "nvmf_tgt_poll_group_000", 00:12:51.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:51.137 "listen_address": { 00:12:51.137 "trtype": "TCP", 00:12:51.137 "adrfam": "IPv4", 00:12:51.137 "traddr": "10.0.0.3", 00:12:51.137 "trsvcid": "4420" 00:12:51.137 }, 00:12:51.137 "peer_address": { 00:12:51.137 "trtype": "TCP", 00:12:51.137 "adrfam": "IPv4", 00:12:51.137 "traddr": "10.0.0.1", 00:12:51.137 "trsvcid": "52754" 00:12:51.137 }, 00:12:51.137 "auth": { 00:12:51.137 "state": "completed", 00:12:51.137 "digest": "sha384", 00:12:51.137 "dhgroup": "ffdhe4096" 00:12:51.137 } 00:12:51.137 } 00:12:51.137 ]' 00:12:51.137 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.137 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.137 02:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.396 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:51.396 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.396 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.396 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.396 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.655 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:51.655 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:12:52.338 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.338 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:52.338 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.338 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.338 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.338 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.338 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:52.338 02:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.597 02:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.597 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.856 00:12:52.856 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.856 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.856 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.114 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.114 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.114 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.114 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.114 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.114 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.114 { 00:12:53.114 "cntlid": 75, 00:12:53.114 "qid": 0, 00:12:53.114 "state": "enabled", 00:12:53.114 "thread": "nvmf_tgt_poll_group_000", 00:12:53.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:53.114 "listen_address": { 00:12:53.114 "trtype": "TCP", 00:12:53.114 "adrfam": "IPv4", 00:12:53.114 "traddr": "10.0.0.3", 00:12:53.114 "trsvcid": "4420" 00:12:53.114 }, 00:12:53.114 "peer_address": { 00:12:53.114 "trtype": "TCP", 00:12:53.114 "adrfam": "IPv4", 00:12:53.114 "traddr": "10.0.0.1", 00:12:53.114 "trsvcid": "52784" 00:12:53.114 }, 00:12:53.114 "auth": { 00:12:53.114 "state": "completed", 00:12:53.114 "digest": "sha384", 00:12:53.114 "dhgroup": "ffdhe4096" 00:12:53.114 } 00:12:53.114 } 00:12:53.114 ]' 00:12:53.114 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.114 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.114 02:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.373 02:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:53.373 02:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.373 02:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.373 02:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.373 02:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.632 02:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:53.632 02:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:12:54.199 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.199 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:54.199 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.199 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.459 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.459 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.459 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:54.459 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.719 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.286 00:12:55.286 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.286 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.286 02:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.545 { 00:12:55.545 "cntlid": 77, 00:12:55.545 "qid": 0, 00:12:55.545 "state": "enabled", 00:12:55.545 "thread": "nvmf_tgt_poll_group_000", 00:12:55.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:55.545 "listen_address": { 00:12:55.545 "trtype": "TCP", 00:12:55.545 "adrfam": "IPv4", 00:12:55.545 "traddr": "10.0.0.3", 00:12:55.545 "trsvcid": "4420" 00:12:55.545 }, 00:12:55.545 "peer_address": { 00:12:55.545 "trtype": "TCP", 00:12:55.545 "adrfam": "IPv4", 00:12:55.545 "traddr": "10.0.0.1", 00:12:55.545 "trsvcid": "52814" 00:12:55.545 }, 00:12:55.545 "auth": { 00:12:55.545 "state": "completed", 00:12:55.545 "digest": "sha384", 00:12:55.545 "dhgroup": "ffdhe4096" 00:12:55.545 } 00:12:55.545 } 00:12:55.545 ]' 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.545 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.804 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:55.804 02:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.740 02:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:56.740 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:57.307 00:12:57.307 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.307 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.307 02:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.566 { 00:12:57.566 "cntlid": 79, 00:12:57.566 "qid": 0, 00:12:57.566 "state": "enabled", 00:12:57.566 "thread": "nvmf_tgt_poll_group_000", 00:12:57.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:57.566 "listen_address": { 00:12:57.566 "trtype": "TCP", 00:12:57.566 "adrfam": "IPv4", 00:12:57.566 "traddr": "10.0.0.3", 00:12:57.566 "trsvcid": "4420" 00:12:57.566 }, 00:12:57.566 "peer_address": { 00:12:57.566 "trtype": "TCP", 00:12:57.566 "adrfam": "IPv4", 00:12:57.566 "traddr": "10.0.0.1", 00:12:57.566 "trsvcid": "52836" 00:12:57.566 }, 00:12:57.566 "auth": { 00:12:57.566 "state": "completed", 00:12:57.566 "digest": "sha384", 00:12:57.566 "dhgroup": "ffdhe4096" 00:12:57.566 } 00:12:57.566 } 00:12:57.566 ]' 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:57.566 02:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.566 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.825 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:57.825 02:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:12:58.763 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.763 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:12:58.763 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.763 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.763 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.763 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:58.763 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.763 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:58.763 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.022 02:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.284 00:12:59.284 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.284 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.284 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.851 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.851 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.851 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.851 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.851 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.851 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.851 { 00:12:59.851 "cntlid": 81, 00:12:59.851 "qid": 0, 00:12:59.851 "state": "enabled", 00:12:59.851 "thread": "nvmf_tgt_poll_group_000", 00:12:59.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:12:59.851 "listen_address": { 00:12:59.851 "trtype": "TCP", 00:12:59.851 "adrfam": "IPv4", 00:12:59.851 "traddr": "10.0.0.3", 00:12:59.851 "trsvcid": "4420" 00:12:59.851 }, 00:12:59.851 "peer_address": { 00:12:59.851 "trtype": "TCP", 00:12:59.851 "adrfam": "IPv4", 00:12:59.851 "traddr": "10.0.0.1", 00:12:59.851 "trsvcid": "49428" 00:12:59.851 }, 00:12:59.851 "auth": { 00:12:59.851 "state": "completed", 00:12:59.851 "digest": "sha384", 00:12:59.851 "dhgroup": "ffdhe6144" 00:12:59.851 } 00:12:59.851 } 00:12:59.851 ]' 00:12:59.851 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
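The records above and below repeat one connect_authenticate() iteration per digest/dhgroup/key combination: the host RPC server is restricted to the digest and DH group under test, the host NQN is re-added to cnode0 with the key pair, a bdev controller is attached and the resulting qpair's auth block is checked, then the controller is detached and the same authentication is exercised through the kernel initiator via nvme_connect() before the host entry is removed again. A condensed sketch of the iteration currently in flight (sha384 / ffdhe6144 / key0), using the sockets, addresses and NQNs shown in this run; rpc_cmd is the test's target-side RPC wrapper, and $key0/$ckey0 are hypothetical shell variables standing in for the DHHC-1 secrets printed above, not names from the original script:

  # host side: only negotiate the digest/dhgroup under test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # target side: allow the host NQN on cnode0 with this key pair (ckey0 makes auth bidirectional)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach a controller over TCP, authenticating with the same keys
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # target side: the qpair's auth block should reflect the negotiated parameters
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # sha384
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # ffdhe6144
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # completed
  # detach, then repeat the same authentication through the kernel initiator
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 \
      --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 \
      --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156

Key 3 has no controller key in this run, so its iterations add the host with --dhchap-key key3 only and attach without a ckey, as the cntlid 71 and 79 records above show.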
00:12:59.851 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:59.852 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.852 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:59.852 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.852 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.852 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.852 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.111 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:00.111 02:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:00.678 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.678 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:00.678 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.678 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.678 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.678 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.678 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:00.678 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.937 02:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:01.504 00:13:01.504 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.504 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.504 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.762 { 00:13:01.762 "cntlid": 83, 00:13:01.762 "qid": 0, 00:13:01.762 "state": "enabled", 00:13:01.762 "thread": "nvmf_tgt_poll_group_000", 00:13:01.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:01.762 "listen_address": { 00:13:01.762 "trtype": "TCP", 00:13:01.762 "adrfam": "IPv4", 00:13:01.762 "traddr": "10.0.0.3", 00:13:01.762 "trsvcid": "4420" 00:13:01.762 }, 00:13:01.762 "peer_address": { 00:13:01.762 "trtype": "TCP", 00:13:01.762 "adrfam": "IPv4", 00:13:01.762 "traddr": "10.0.0.1", 00:13:01.762 "trsvcid": "49448" 00:13:01.762 }, 00:13:01.762 "auth": { 00:13:01.762 "state": "completed", 00:13:01.762 "digest": "sha384", 
00:13:01.762 "dhgroup": "ffdhe6144" 00:13:01.762 } 00:13:01.762 } 00:13:01.762 ]' 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.762 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.328 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:02.328 02:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:02.896 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.896 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:02.896 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.896 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.896 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.897 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.897 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:02.897 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.156 02:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.724 00:13:03.724 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.724 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.724 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.983 { 00:13:03.983 "cntlid": 85, 00:13:03.983 "qid": 0, 00:13:03.983 "state": "enabled", 00:13:03.983 "thread": "nvmf_tgt_poll_group_000", 00:13:03.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:03.983 "listen_address": { 00:13:03.983 "trtype": "TCP", 00:13:03.983 "adrfam": "IPv4", 00:13:03.983 "traddr": "10.0.0.3", 00:13:03.983 "trsvcid": "4420" 00:13:03.983 }, 00:13:03.983 "peer_address": { 00:13:03.983 "trtype": "TCP", 00:13:03.983 "adrfam": "IPv4", 00:13:03.983 "traddr": "10.0.0.1", 00:13:03.983 "trsvcid": "49474" 
00:13:03.983 }, 00:13:03.983 "auth": { 00:13:03.983 "state": "completed", 00:13:03.983 "digest": "sha384", 00:13:03.983 "dhgroup": "ffdhe6144" 00:13:03.983 } 00:13:03.983 } 00:13:03.983 ]' 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.983 02:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.549 02:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:04.549 02:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:05.112 02:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.112 02:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:05.112 02:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.112 02:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.112 02:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.112 02:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.112 02:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:05.112 02:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:05.382 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:05.953 00:13:05.953 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:05.953 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.953 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.212 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.212 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.212 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.212 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.212 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.212 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.212 { 00:13:06.212 "cntlid": 87, 00:13:06.212 "qid": 0, 00:13:06.212 "state": "enabled", 00:13:06.212 "thread": "nvmf_tgt_poll_group_000", 00:13:06.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:06.212 "listen_address": { 00:13:06.212 "trtype": "TCP", 00:13:06.212 "adrfam": "IPv4", 00:13:06.212 "traddr": "10.0.0.3", 00:13:06.212 "trsvcid": "4420" 00:13:06.212 }, 00:13:06.212 "peer_address": { 00:13:06.212 "trtype": "TCP", 00:13:06.212 "adrfam": "IPv4", 00:13:06.212 "traddr": "10.0.0.1", 00:13:06.212 "trsvcid": 
"49506" 00:13:06.212 }, 00:13:06.212 "auth": { 00:13:06.212 "state": "completed", 00:13:06.212 "digest": "sha384", 00:13:06.212 "dhgroup": "ffdhe6144" 00:13:06.212 } 00:13:06.212 } 00:13:06.212 ]' 00:13:06.212 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.212 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:06.212 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.212 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:06.212 02:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.212 02:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.212 02:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.212 02:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.471 02:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:06.471 02:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:07.407 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.407 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:07.407 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.407 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.407 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.407 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:07.407 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.407 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:07.407 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.666 02:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.239 00:13:08.498 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.498 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.498 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.757 { 00:13:08.757 "cntlid": 89, 00:13:08.757 "qid": 0, 00:13:08.757 "state": "enabled", 00:13:08.757 "thread": "nvmf_tgt_poll_group_000", 00:13:08.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:08.757 "listen_address": { 00:13:08.757 "trtype": "TCP", 00:13:08.757 "adrfam": "IPv4", 00:13:08.757 "traddr": "10.0.0.3", 00:13:08.757 "trsvcid": "4420" 00:13:08.757 }, 00:13:08.757 "peer_address": { 00:13:08.757 
"trtype": "TCP", 00:13:08.757 "adrfam": "IPv4", 00:13:08.757 "traddr": "10.0.0.1", 00:13:08.757 "trsvcid": "41006" 00:13:08.757 }, 00:13:08.757 "auth": { 00:13:08.757 "state": "completed", 00:13:08.757 "digest": "sha384", 00:13:08.757 "dhgroup": "ffdhe8192" 00:13:08.757 } 00:13:08.757 } 00:13:08.757 ]' 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.757 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.324 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:09.324 02:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:09.890 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.890 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:09.890 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.890 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.890 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.890 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:09.890 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:10.149 02:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.149 02:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.716 00:13:10.716 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.716 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.716 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.975 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.975 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.975 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.975 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.975 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.975 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.975 { 00:13:10.975 "cntlid": 91, 00:13:10.975 "qid": 0, 00:13:10.975 "state": "enabled", 00:13:10.975 "thread": "nvmf_tgt_poll_group_000", 00:13:10.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 
00:13:10.975 "listen_address": { 00:13:10.975 "trtype": "TCP", 00:13:10.975 "adrfam": "IPv4", 00:13:10.975 "traddr": "10.0.0.3", 00:13:10.975 "trsvcid": "4420" 00:13:10.975 }, 00:13:10.975 "peer_address": { 00:13:10.975 "trtype": "TCP", 00:13:10.975 "adrfam": "IPv4", 00:13:10.975 "traddr": "10.0.0.1", 00:13:10.975 "trsvcid": "41030" 00:13:10.975 }, 00:13:10.975 "auth": { 00:13:10.975 "state": "completed", 00:13:10.975 "digest": "sha384", 00:13:10.975 "dhgroup": "ffdhe8192" 00:13:10.975 } 00:13:10.975 } 00:13:10.975 ]' 00:13:10.975 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.233 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.233 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.233 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:11.233 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.233 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.233 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.233 02:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.491 02:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:11.491 02:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:12.058 02:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.058 02:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:12.058 02:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.058 02:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.058 02:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.058 02:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.058 02:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:12.058 02:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:12.625 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:12.625 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.625 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:12.625 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:12.625 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:12.625 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.625 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.625 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.625 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.625 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.625 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.626 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.626 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.193 00:13:13.193 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.193 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.193 02:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.452 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.452 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.452 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.452 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.452 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.452 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.452 { 00:13:13.452 "cntlid": 93, 00:13:13.452 "qid": 0, 00:13:13.452 "state": "enabled", 00:13:13.452 "thread": 
"nvmf_tgt_poll_group_000", 00:13:13.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:13.453 "listen_address": { 00:13:13.453 "trtype": "TCP", 00:13:13.453 "adrfam": "IPv4", 00:13:13.453 "traddr": "10.0.0.3", 00:13:13.453 "trsvcid": "4420" 00:13:13.453 }, 00:13:13.453 "peer_address": { 00:13:13.453 "trtype": "TCP", 00:13:13.453 "adrfam": "IPv4", 00:13:13.453 "traddr": "10.0.0.1", 00:13:13.453 "trsvcid": "41050" 00:13:13.453 }, 00:13:13.453 "auth": { 00:13:13.453 "state": "completed", 00:13:13.453 "digest": "sha384", 00:13:13.453 "dhgroup": "ffdhe8192" 00:13:13.453 } 00:13:13.453 } 00:13:13.453 ]' 00:13:13.453 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.453 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:13.453 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.453 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:13.453 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.453 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.453 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.453 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.711 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:13.711 02:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:14.278 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.278 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:14.278 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.278 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.278 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.278 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.278 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:14.278 02:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:14.845 02:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.413 00:13:15.413 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.413 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.413 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.672 { 00:13:15.672 "cntlid": 95, 00:13:15.672 "qid": 0, 00:13:15.672 "state": "enabled", 00:13:15.672 
"thread": "nvmf_tgt_poll_group_000", 00:13:15.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:15.672 "listen_address": { 00:13:15.672 "trtype": "TCP", 00:13:15.672 "adrfam": "IPv4", 00:13:15.672 "traddr": "10.0.0.3", 00:13:15.672 "trsvcid": "4420" 00:13:15.672 }, 00:13:15.672 "peer_address": { 00:13:15.672 "trtype": "TCP", 00:13:15.672 "adrfam": "IPv4", 00:13:15.672 "traddr": "10.0.0.1", 00:13:15.672 "trsvcid": "41090" 00:13:15.672 }, 00:13:15.672 "auth": { 00:13:15.672 "state": "completed", 00:13:15.672 "digest": "sha384", 00:13:15.672 "dhgroup": "ffdhe8192" 00:13:15.672 } 00:13:15.672 } 00:13:15.672 ]' 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.672 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.931 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:15.931 02:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:16.868 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.868 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:16.868 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.868 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.868 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.868 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:16.868 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:16.868 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.868 02:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:16.868 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.127 02:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.385 00:13:17.385 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.385 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.385 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.644 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.644 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.644 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.644 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.644 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.644 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.644 { 00:13:17.644 "cntlid": 97, 00:13:17.644 "qid": 0, 00:13:17.644 "state": "enabled", 00:13:17.644 "thread": "nvmf_tgt_poll_group_000", 00:13:17.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:17.644 "listen_address": { 00:13:17.644 "trtype": "TCP", 00:13:17.644 "adrfam": "IPv4", 00:13:17.644 "traddr": "10.0.0.3", 00:13:17.644 "trsvcid": "4420" 00:13:17.644 }, 00:13:17.644 "peer_address": { 00:13:17.644 "trtype": "TCP", 00:13:17.644 "adrfam": "IPv4", 00:13:17.644 "traddr": "10.0.0.1", 00:13:17.644 "trsvcid": "41104" 00:13:17.644 }, 00:13:17.644 "auth": { 00:13:17.644 "state": "completed", 00:13:17.644 "digest": "sha512", 00:13:17.644 "dhgroup": "null" 00:13:17.644 } 00:13:17.644 } 00:13:17.644 ]' 00:13:17.644 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.644 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.644 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.644 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:17.644 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.902 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.902 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.902 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.173 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:18.173 02:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:18.772 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.772 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:18.772 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.772 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.772 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:18.772 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.772 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:18.772 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.032 02:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.600 00:13:19.600 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.600 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.600 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.860 02:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.860 { 00:13:19.860 "cntlid": 99, 00:13:19.860 "qid": 0, 00:13:19.860 "state": "enabled", 00:13:19.860 "thread": "nvmf_tgt_poll_group_000", 00:13:19.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:19.860 "listen_address": { 00:13:19.860 "trtype": "TCP", 00:13:19.860 "adrfam": "IPv4", 00:13:19.860 "traddr": "10.0.0.3", 00:13:19.860 "trsvcid": "4420" 00:13:19.860 }, 00:13:19.860 "peer_address": { 00:13:19.860 "trtype": "TCP", 00:13:19.860 "adrfam": "IPv4", 00:13:19.860 "traddr": "10.0.0.1", 00:13:19.860 "trsvcid": "34724" 00:13:19.860 }, 00:13:19.860 "auth": { 00:13:19.860 "state": "completed", 00:13:19.860 "digest": "sha512", 00:13:19.860 "dhgroup": "null" 00:13:19.860 } 00:13:19.860 } 00:13:19.860 ]' 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.860 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.123 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:20.123 02:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:21.059 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.059 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:21.059 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.059 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.059 02:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.059 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.059 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:21.059 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:21.059 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.060 02:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:21.627 00:13:21.627 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.627 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.627 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.886 { 00:13:21.886 "cntlid": 101, 00:13:21.886 "qid": 0, 00:13:21.886 "state": "enabled", 00:13:21.886 "thread": "nvmf_tgt_poll_group_000", 00:13:21.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:21.886 "listen_address": { 00:13:21.886 "trtype": "TCP", 00:13:21.886 "adrfam": "IPv4", 00:13:21.886 "traddr": "10.0.0.3", 00:13:21.886 "trsvcid": "4420" 00:13:21.886 }, 00:13:21.886 "peer_address": { 00:13:21.886 "trtype": "TCP", 00:13:21.886 "adrfam": "IPv4", 00:13:21.886 "traddr": "10.0.0.1", 00:13:21.886 "trsvcid": "34756" 00:13:21.886 }, 00:13:21.886 "auth": { 00:13:21.886 "state": "completed", 00:13:21.886 "digest": "sha512", 00:13:21.886 "dhgroup": "null" 00:13:21.886 } 00:13:21.886 } 00:13:21.886 ]' 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.886 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.145 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:22.145 02:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:23.081 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.081 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:23.081 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.081 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
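
The repeated [[ sha512 == ... ]], [[ null == ... ]] and [[ completed == ... ]] tests above are the assertions on that qpairs JSON: digest and dhgroup must match what was configured for the round, and the auth state must have reached "completed". A standalone equivalent of the check (a sketch, using the same subsystem NQN as the rest of this job) would be:

# Assert the negotiated auth parameters for the first qpair (sketch).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]] && echo "DH-HMAC-CHAP negotiation completed"
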
-- common/autotest_common.sh@10 -- # set +x 00:13:23.081 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.081 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.081 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:23.081 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:23.340 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:23.341 02:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:23.599 00:13:23.599 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.599 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.599 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.857 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.858 { 00:13:23.858 "cntlid": 103, 00:13:23.858 "qid": 0, 00:13:23.858 "state": "enabled", 00:13:23.858 "thread": "nvmf_tgt_poll_group_000", 00:13:23.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:23.858 "listen_address": { 00:13:23.858 "trtype": "TCP", 00:13:23.858 "adrfam": "IPv4", 00:13:23.858 "traddr": "10.0.0.3", 00:13:23.858 "trsvcid": "4420" 00:13:23.858 }, 00:13:23.858 "peer_address": { 00:13:23.858 "trtype": "TCP", 00:13:23.858 "adrfam": "IPv4", 00:13:23.858 "traddr": "10.0.0.1", 00:13:23.858 "trsvcid": "34776" 00:13:23.858 }, 00:13:23.858 "auth": { 00:13:23.858 "state": "completed", 00:13:23.858 "digest": "sha512", 00:13:23.858 "dhgroup": "null" 00:13:23.858 } 00:13:23.858 } 00:13:23.858 ]' 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.858 02:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.427 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:24.427 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:24.996 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.996 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:24.996 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.996 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.996 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:24.996 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:24.996 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.996 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:24.996 02:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.255 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:25.514 00:13:25.773 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.774 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.774 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.033 
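
At target/auth.sh@119-@121 the trace moves to a new dhgroup: after finishing the null rounds it re-runs bdev_nvme_set_options with ffdhe2048 and starts over at key0, so the script is sweeping a dhgroup x key matrix for the sha512 digest. Restricted to the dhgroups visible in this excerpt, the driver loop looks roughly like the sketch below (connect_authenticate being the per-key round sketched earlier; the keys array is assumed to have been built earlier in auth.sh):

# Driver loop as inferred from the trace (sketch; only dhgroups seen in this excerpt).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
keys=(key0 key1 key2 key3)

for dhgroup in null ffdhe2048 ffdhe3072; do
    for keyid in "${!keys[@]}"; do
        # Re-pin the host-side bdev layer to the digest/dhgroup under test...
        "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # ...then run one attach/verify/detach plus nvme connect/disconnect round.
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done
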
02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.033 { 00:13:26.033 "cntlid": 105, 00:13:26.033 "qid": 0, 00:13:26.033 "state": "enabled", 00:13:26.033 "thread": "nvmf_tgt_poll_group_000", 00:13:26.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:26.033 "listen_address": { 00:13:26.033 "trtype": "TCP", 00:13:26.033 "adrfam": "IPv4", 00:13:26.033 "traddr": "10.0.0.3", 00:13:26.033 "trsvcid": "4420" 00:13:26.033 }, 00:13:26.033 "peer_address": { 00:13:26.033 "trtype": "TCP", 00:13:26.033 "adrfam": "IPv4", 00:13:26.033 "traddr": "10.0.0.1", 00:13:26.033 "trsvcid": "34786" 00:13:26.033 }, 00:13:26.033 "auth": { 00:13:26.033 "state": "completed", 00:13:26.033 "digest": "sha512", 00:13:26.033 "dhgroup": "ffdhe2048" 00:13:26.033 } 00:13:26.033 } 00:13:26.033 ]' 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.033 02:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.292 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:26.293 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:26.860 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.119 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:27.119 02:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.119 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.119 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.119 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.119 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:27.119 02:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.379 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:27.638 00:13:27.638 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.638 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.638 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.898 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:27.898 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.898 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.898 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.898 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.898 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.898 { 00:13:27.898 "cntlid": 107, 00:13:27.898 "qid": 0, 00:13:27.898 "state": "enabled", 00:13:27.898 "thread": "nvmf_tgt_poll_group_000", 00:13:27.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:27.898 "listen_address": { 00:13:27.898 "trtype": "TCP", 00:13:27.898 "adrfam": "IPv4", 00:13:27.898 "traddr": "10.0.0.3", 00:13:27.898 "trsvcid": "4420" 00:13:27.898 }, 00:13:27.898 "peer_address": { 00:13:27.898 "trtype": "TCP", 00:13:27.898 "adrfam": "IPv4", 00:13:27.898 "traddr": "10.0.0.1", 00:13:27.898 "trsvcid": "34800" 00:13:27.898 }, 00:13:27.898 "auth": { 00:13:27.898 "state": "completed", 00:13:27.898 "digest": "sha512", 00:13:27.898 "dhgroup": "ffdhe2048" 00:13:27.898 } 00:13:27.898 } 00:13:27.898 ]' 00:13:27.898 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.898 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.898 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.898 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:27.898 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.156 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.156 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.156 02:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.414 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:28.414 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:28.982 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.982 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:28.982 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.982 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.982 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.982 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.982 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:28.982 02:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:29.241 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:29.241 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.241 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:29.241 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:29.241 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:29.241 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.241 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.241 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.242 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.242 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.242 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.242 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.242 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:29.500 00:13:29.500 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.500 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.500 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.068 { 00:13:30.068 "cntlid": 109, 00:13:30.068 "qid": 0, 00:13:30.068 "state": "enabled", 00:13:30.068 "thread": "nvmf_tgt_poll_group_000", 00:13:30.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:30.068 "listen_address": { 00:13:30.068 "trtype": "TCP", 00:13:30.068 "adrfam": "IPv4", 00:13:30.068 "traddr": "10.0.0.3", 00:13:30.068 "trsvcid": "4420" 00:13:30.068 }, 00:13:30.068 "peer_address": { 00:13:30.068 "trtype": "TCP", 00:13:30.068 "adrfam": "IPv4", 00:13:30.068 "traddr": "10.0.0.1", 00:13:30.068 "trsvcid": "33792" 00:13:30.068 }, 00:13:30.068 "auth": { 00:13:30.068 "state": "completed", 00:13:30.068 "digest": "sha512", 00:13:30.068 "dhgroup": "ffdhe2048" 00:13:30.068 } 00:13:30.068 } 00:13:30.068 ]' 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.068 02:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.327 02:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:30.328 02:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:30.896 02:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.896 02:17:32 
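
Each round also exercises the kernel initiator: nvme_connect hands the same key material to nvme-cli as inline DH-HMAC-CHAP secrets. In the secret format, the field after "DHHC-1:" names the transformation hash of the encoded key (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), followed by the base64-encoded key material and CRC. A sketch of that leg, with the secrets deliberately abbreviated (the full strings are the ones printed in the trace):

# Kernel-initiator leg of one round (sketch; secrets shortened here on purpose).
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156
HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156
SUBNQN=nqn.2024-03.io.spdk:cnode0

nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret      'DHHC-1:02:NWI3...' \
    --dhchap-ctrl-secret 'DHHC-1:01:Y2Nh...'

# The trace only records the connect and the later teardown.
nvme disconnect -n "$SUBNQN"
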
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:30.896 02:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.896 02:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.896 02:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.896 02:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.896 02:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:30.896 02:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.486 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:31.745 00:13:31.745 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.745 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.745 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.004 { 00:13:32.004 "cntlid": 111, 00:13:32.004 "qid": 0, 00:13:32.004 "state": "enabled", 00:13:32.004 "thread": "nvmf_tgt_poll_group_000", 00:13:32.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:32.004 "listen_address": { 00:13:32.004 "trtype": "TCP", 00:13:32.004 "adrfam": "IPv4", 00:13:32.004 "traddr": "10.0.0.3", 00:13:32.004 "trsvcid": "4420" 00:13:32.004 }, 00:13:32.004 "peer_address": { 00:13:32.004 "trtype": "TCP", 00:13:32.004 "adrfam": "IPv4", 00:13:32.004 "traddr": "10.0.0.1", 00:13:32.004 "trsvcid": "33826" 00:13:32.004 }, 00:13:32.004 "auth": { 00:13:32.004 "state": "completed", 00:13:32.004 "digest": "sha512", 00:13:32.004 "dhgroup": "ffdhe2048" 00:13:32.004 } 00:13:32.004 } 00:13:32.004 ]' 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.004 02:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.263 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:32.263 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:32.830 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.830 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:32.830 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.830 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.830 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.089 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:33.089 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.089 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:33.089 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:33.347 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:33.347 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.347 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:33.347 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:33.347 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:33.347 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.347 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.347 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.347 02:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.347 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.347 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.347 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.347 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:33.606 00:13:33.606 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.606 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.606 02:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.864 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.864 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.865 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.865 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.865 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.865 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.865 { 00:13:33.865 "cntlid": 113, 00:13:33.865 "qid": 0, 00:13:33.865 "state": "enabled", 00:13:33.865 "thread": "nvmf_tgt_poll_group_000", 00:13:33.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:33.865 "listen_address": { 00:13:33.865 "trtype": "TCP", 00:13:33.865 "adrfam": "IPv4", 00:13:33.865 "traddr": "10.0.0.3", 00:13:33.865 "trsvcid": "4420" 00:13:33.865 }, 00:13:33.865 "peer_address": { 00:13:33.865 "trtype": "TCP", 00:13:33.865 "adrfam": "IPv4", 00:13:33.865 "traddr": "10.0.0.1", 00:13:33.865 "trsvcid": "33838" 00:13:33.865 }, 00:13:33.865 "auth": { 00:13:33.865 "state": "completed", 00:13:33.865 "digest": "sha512", 00:13:33.865 "dhgroup": "ffdhe3072" 00:13:33.865 } 00:13:33.865 } 00:13:33.865 ]' 00:13:33.865 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.865 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.865 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.865 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:33.865 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.124 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.124 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.124 02:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.382 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:34.382 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 
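
One detail visible across these rounds: the line ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) builds the controller-key option only when a controller key exists for that key index, which is why the key3 rounds in this excerpt (cntlid 103 and 111) add the host with --dhchap-key key3 alone, i.e. without bidirectional authentication. A small sketch of the idiom (the empty ckeys[3] is inferred from the trace; $3 is the keyid argument of connect_authenticate):

# Conditional controller-key option, as used in the rounds above (sketch).
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156
ckeys=(ckey0 ckey1 ckey2 "")   # no controller key for index 3 in this run

keyid=3
# ${var:+word} expands to word only when var is set and non-empty, so for keyid=3
# the array stays empty and no --dhchap-ctrlr-key option is passed at all.
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

echo nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" "${ckey[@]}"
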
00:13:34.948 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.948 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:34.948 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.948 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.948 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.948 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.948 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:34.948 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.206 02:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:35.464 00:13:35.464 02:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.464 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.464 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.723 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.723 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.723 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.723 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.723 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.723 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.723 { 00:13:35.723 "cntlid": 115, 00:13:35.723 "qid": 0, 00:13:35.723 "state": "enabled", 00:13:35.723 "thread": "nvmf_tgt_poll_group_000", 00:13:35.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:35.723 "listen_address": { 00:13:35.723 "trtype": "TCP", 00:13:35.723 "adrfam": "IPv4", 00:13:35.723 "traddr": "10.0.0.3", 00:13:35.723 "trsvcid": "4420" 00:13:35.723 }, 00:13:35.723 "peer_address": { 00:13:35.723 "trtype": "TCP", 00:13:35.723 "adrfam": "IPv4", 00:13:35.723 "traddr": "10.0.0.1", 00:13:35.723 "trsvcid": "33868" 00:13:35.723 }, 00:13:35.723 "auth": { 00:13:35.723 "state": "completed", 00:13:35.723 "digest": "sha512", 00:13:35.723 "dhgroup": "ffdhe3072" 00:13:35.723 } 00:13:35.723 } 00:13:35.723 ]' 00:13:35.723 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.723 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.723 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.982 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:35.982 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.982 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.982 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.982 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.241 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:36.241 02:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret 
DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:36.809 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.809 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:36.809 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.809 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.809 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.809 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.809 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:36.809 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.068 02:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:37.327 00:13:37.327 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.327 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.327 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.586 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.586 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.586 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.586 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.850 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.850 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.850 { 00:13:37.850 "cntlid": 117, 00:13:37.850 "qid": 0, 00:13:37.850 "state": "enabled", 00:13:37.850 "thread": "nvmf_tgt_poll_group_000", 00:13:37.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:37.850 "listen_address": { 00:13:37.850 "trtype": "TCP", 00:13:37.850 "adrfam": "IPv4", 00:13:37.850 "traddr": "10.0.0.3", 00:13:37.850 "trsvcid": "4420" 00:13:37.850 }, 00:13:37.850 "peer_address": { 00:13:37.850 "trtype": "TCP", 00:13:37.850 "adrfam": "IPv4", 00:13:37.850 "traddr": "10.0.0.1", 00:13:37.850 "trsvcid": "33904" 00:13:37.850 }, 00:13:37.850 "auth": { 00:13:37.850 "state": "completed", 00:13:37.850 "digest": "sha512", 00:13:37.850 "dhgroup": "ffdhe3072" 00:13:37.850 } 00:13:37.850 } 00:13:37.850 ]' 00:13:37.850 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.850 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.850 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.850 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:37.850 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.850 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.850 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.850 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.111 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:38.111 02:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:38.678 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.678 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:38.678 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.678 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.678 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.678 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.678 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:38.678 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:38.938 02:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:39.507 00:13:39.507 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.507 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.507 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.767 { 00:13:39.767 "cntlid": 119, 00:13:39.767 "qid": 0, 00:13:39.767 "state": "enabled", 00:13:39.767 "thread": "nvmf_tgt_poll_group_000", 00:13:39.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:39.767 "listen_address": { 00:13:39.767 "trtype": "TCP", 00:13:39.767 "adrfam": "IPv4", 00:13:39.767 "traddr": "10.0.0.3", 00:13:39.767 "trsvcid": "4420" 00:13:39.767 }, 00:13:39.767 "peer_address": { 00:13:39.767 "trtype": "TCP", 00:13:39.767 "adrfam": "IPv4", 00:13:39.767 "traddr": "10.0.0.1", 00:13:39.767 "trsvcid": "47634" 00:13:39.767 }, 00:13:39.767 "auth": { 00:13:39.767 "state": "completed", 00:13:39.767 "digest": "sha512", 00:13:39.767 "dhgroup": "ffdhe3072" 00:13:39.767 } 00:13:39.767 } 00:13:39.767 ]' 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.767 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.334 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:40.334 02:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:40.903 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.903 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:40.903 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.903 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.903 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.903 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:40.903 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.903 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:40.903 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.162 02:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.421 00:13:41.421 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.421 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.421 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.680 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.680 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.680 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.680 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.680 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.680 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.680 { 00:13:41.680 "cntlid": 121, 00:13:41.680 "qid": 0, 00:13:41.680 "state": "enabled", 00:13:41.680 "thread": "nvmf_tgt_poll_group_000", 00:13:41.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:41.680 "listen_address": { 00:13:41.680 "trtype": "TCP", 00:13:41.680 "adrfam": "IPv4", 00:13:41.680 "traddr": "10.0.0.3", 00:13:41.680 "trsvcid": "4420" 00:13:41.680 }, 00:13:41.680 "peer_address": { 00:13:41.680 "trtype": "TCP", 00:13:41.680 "adrfam": "IPv4", 00:13:41.680 "traddr": "10.0.0.1", 00:13:41.680 "trsvcid": "47662" 00:13:41.680 }, 00:13:41.680 "auth": { 00:13:41.680 "state": "completed", 00:13:41.680 "digest": "sha512", 00:13:41.680 "dhgroup": "ffdhe4096" 00:13:41.680 } 00:13:41.680 } 00:13:41.680 ]' 00:13:41.680 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.940 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.940 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.940 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:41.940 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.940 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.940 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.940 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.198 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret 
DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:42.199 02:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:42.766 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.766 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:42.766 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.766 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.766 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.766 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.766 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:42.766 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:43.332 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:43.332 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.332 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:43.332 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:43.332 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:43.333 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.333 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.333 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.333 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.333 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.333 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.333 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.333 02:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.590 00:13:43.590 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.590 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.590 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.848 { 00:13:43.848 "cntlid": 123, 00:13:43.848 "qid": 0, 00:13:43.848 "state": "enabled", 00:13:43.848 "thread": "nvmf_tgt_poll_group_000", 00:13:43.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:43.848 "listen_address": { 00:13:43.848 "trtype": "TCP", 00:13:43.848 "adrfam": "IPv4", 00:13:43.848 "traddr": "10.0.0.3", 00:13:43.848 "trsvcid": "4420" 00:13:43.848 }, 00:13:43.848 "peer_address": { 00:13:43.848 "trtype": "TCP", 00:13:43.848 "adrfam": "IPv4", 00:13:43.848 "traddr": "10.0.0.1", 00:13:43.848 "trsvcid": "47684" 00:13:43.848 }, 00:13:43.848 "auth": { 00:13:43.848 "state": "completed", 00:13:43.848 "digest": "sha512", 00:13:43.848 "dhgroup": "ffdhe4096" 00:13:43.848 } 00:13:43.848 } 00:13:43.848 ]' 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.848 02:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.460 02:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:44.460 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:45.035 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.035 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:45.035 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.035 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.035 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.035 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.035 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:45.035 02:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.293 02:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.293 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.553 00:13:45.811 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.811 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.811 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.070 { 00:13:46.070 "cntlid": 125, 00:13:46.070 "qid": 0, 00:13:46.070 "state": "enabled", 00:13:46.070 "thread": "nvmf_tgt_poll_group_000", 00:13:46.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:46.070 "listen_address": { 00:13:46.070 "trtype": "TCP", 00:13:46.070 "adrfam": "IPv4", 00:13:46.070 "traddr": "10.0.0.3", 00:13:46.070 "trsvcid": "4420" 00:13:46.070 }, 00:13:46.070 "peer_address": { 00:13:46.070 "trtype": "TCP", 00:13:46.070 "adrfam": "IPv4", 00:13:46.070 "traddr": "10.0.0.1", 00:13:46.070 "trsvcid": "47720" 00:13:46.070 }, 00:13:46.070 "auth": { 00:13:46.070 "state": "completed", 00:13:46.070 "digest": "sha512", 00:13:46.070 "dhgroup": "ffdhe4096" 00:13:46.070 } 00:13:46.070 } 00:13:46.070 ]' 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.070 02:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.328 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:46.328 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:46.894 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.894 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:46.894 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.894 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.894 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.894 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.894 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:46.894 02:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:47.461 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:47.462 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:47.721 00:13:47.721 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.721 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.721 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.978 { 00:13:47.978 "cntlid": 127, 00:13:47.978 "qid": 0, 00:13:47.978 "state": "enabled", 00:13:47.978 "thread": "nvmf_tgt_poll_group_000", 00:13:47.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:47.978 "listen_address": { 00:13:47.978 "trtype": "TCP", 00:13:47.978 "adrfam": "IPv4", 00:13:47.978 "traddr": "10.0.0.3", 00:13:47.978 "trsvcid": "4420" 00:13:47.978 }, 00:13:47.978 "peer_address": { 00:13:47.978 "trtype": "TCP", 00:13:47.978 "adrfam": "IPv4", 00:13:47.978 "traddr": "10.0.0.1", 00:13:47.978 "trsvcid": "47740" 00:13:47.978 }, 00:13:47.978 "auth": { 00:13:47.978 "state": "completed", 00:13:47.978 "digest": "sha512", 00:13:47.978 "dhgroup": "ffdhe4096" 00:13:47.978 } 00:13:47.978 } 00:13:47.978 ]' 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.978 02:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.546 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:48.546 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:49.113 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.113 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:49.113 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.113 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.113 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.113 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:49.113 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.113 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:49.113 02:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.372 02:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.372 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.631 00:13:49.631 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.631 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.631 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.889 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.889 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.889 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.889 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.889 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.889 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.889 { 00:13:49.889 "cntlid": 129, 00:13:49.889 "qid": 0, 00:13:49.889 "state": "enabled", 00:13:49.889 "thread": "nvmf_tgt_poll_group_000", 00:13:49.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:49.889 "listen_address": { 00:13:49.889 "trtype": "TCP", 00:13:49.889 "adrfam": "IPv4", 00:13:49.889 "traddr": "10.0.0.3", 00:13:49.889 "trsvcid": "4420" 00:13:49.889 }, 00:13:49.889 "peer_address": { 00:13:49.889 "trtype": "TCP", 00:13:49.889 "adrfam": "IPv4", 00:13:49.889 "traddr": "10.0.0.1", 00:13:49.889 "trsvcid": "46568" 00:13:49.889 }, 00:13:49.889 "auth": { 00:13:49.889 "state": "completed", 00:13:49.889 "digest": "sha512", 00:13:49.889 "dhgroup": "ffdhe6144" 00:13:49.889 } 00:13:49.889 } 00:13:49.889 ]' 00:13:49.889 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.148 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:50.148 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.148 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:50.148 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.148 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.148 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.148 02:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.406 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:50.407 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:50.983 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.983 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:50.983 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.983 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.983 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.983 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.983 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:50.983 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.241 02:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.241 02:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.808 00:13:51.808 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.808 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.808 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.067 { 00:13:52.067 "cntlid": 131, 00:13:52.067 "qid": 0, 00:13:52.067 "state": "enabled", 00:13:52.067 "thread": "nvmf_tgt_poll_group_000", 00:13:52.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:52.067 "listen_address": { 00:13:52.067 "trtype": "TCP", 00:13:52.067 "adrfam": "IPv4", 00:13:52.067 "traddr": "10.0.0.3", 00:13:52.067 "trsvcid": "4420" 00:13:52.067 }, 00:13:52.067 "peer_address": { 00:13:52.067 "trtype": "TCP", 00:13:52.067 "adrfam": "IPv4", 00:13:52.067 "traddr": "10.0.0.1", 00:13:52.067 "trsvcid": "46598" 00:13:52.067 }, 00:13:52.067 "auth": { 00:13:52.067 "state": "completed", 00:13:52.067 "digest": "sha512", 00:13:52.067 "dhgroup": "ffdhe6144" 00:13:52.067 } 00:13:52.067 } 00:13:52.067 ]' 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.067 02:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.634 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:52.634 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:13:53.201 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.201 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:53.201 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.201 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.201 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.201 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.201 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:53.201 02:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.460 02:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.460 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.027 00:13:54.027 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.027 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.027 02:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.286 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.287 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.287 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.287 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.287 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.287 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.287 { 00:13:54.287 "cntlid": 133, 00:13:54.287 "qid": 0, 00:13:54.287 "state": "enabled", 00:13:54.287 "thread": "nvmf_tgt_poll_group_000", 00:13:54.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:54.287 "listen_address": { 00:13:54.287 "trtype": "TCP", 00:13:54.287 "adrfam": "IPv4", 00:13:54.287 "traddr": "10.0.0.3", 00:13:54.287 "trsvcid": "4420" 00:13:54.287 }, 00:13:54.287 "peer_address": { 00:13:54.287 "trtype": "TCP", 00:13:54.287 "adrfam": "IPv4", 00:13:54.287 "traddr": "10.0.0.1", 00:13:54.287 "trsvcid": "46628" 00:13:54.287 }, 00:13:54.287 "auth": { 00:13:54.287 "state": "completed", 00:13:54.287 "digest": "sha512", 00:13:54.287 "dhgroup": "ffdhe6144" 00:13:54.287 } 00:13:54.287 } 00:13:54.287 ]' 00:13:54.287 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.287 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:54.287 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.287 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:54.287 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.545 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.545 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.545 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.804 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:54.804 02:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.739 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.998 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.998 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:55.998 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:55.998 02:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.256 00:13:56.256 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.256 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.256 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.823 { 00:13:56.823 "cntlid": 135, 00:13:56.823 "qid": 0, 00:13:56.823 "state": "enabled", 00:13:56.823 "thread": "nvmf_tgt_poll_group_000", 00:13:56.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:56.823 "listen_address": { 00:13:56.823 "trtype": "TCP", 00:13:56.823 "adrfam": "IPv4", 00:13:56.823 "traddr": "10.0.0.3", 00:13:56.823 "trsvcid": "4420" 00:13:56.823 }, 00:13:56.823 "peer_address": { 00:13:56.823 "trtype": "TCP", 00:13:56.823 "adrfam": "IPv4", 00:13:56.823 "traddr": "10.0.0.1", 00:13:56.823 "trsvcid": "46654" 00:13:56.823 }, 00:13:56.823 "auth": { 00:13:56.823 "state": "completed", 00:13:56.823 "digest": "sha512", 00:13:56.823 "dhgroup": "ffdhe6144" 00:13:56.823 } 00:13:56.823 } 00:13:56.823 ]' 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.823 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.824 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.824 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.082 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:57.082 02:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:13:57.647 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.647 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:13:57.647 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.647 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.647 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.647 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:57.647 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.647 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:57.647 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.247 02:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.883 00:13:58.883 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.883 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.883 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.141 { 00:13:59.141 "cntlid": 137, 00:13:59.141 "qid": 0, 00:13:59.141 "state": "enabled", 00:13:59.141 "thread": "nvmf_tgt_poll_group_000", 00:13:59.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:13:59.141 "listen_address": { 00:13:59.141 "trtype": "TCP", 00:13:59.141 "adrfam": "IPv4", 00:13:59.141 "traddr": "10.0.0.3", 00:13:59.141 "trsvcid": "4420" 00:13:59.141 }, 00:13:59.141 "peer_address": { 00:13:59.141 "trtype": "TCP", 00:13:59.141 "adrfam": "IPv4", 00:13:59.141 "traddr": "10.0.0.1", 00:13:59.141 "trsvcid": "35928" 00:13:59.141 }, 00:13:59.141 "auth": { 00:13:59.141 "state": "completed", 00:13:59.141 "digest": "sha512", 00:13:59.141 "dhgroup": "ffdhe8192" 00:13:59.141 } 00:13:59.141 } 00:13:59.141 ]' 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.141 02:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.141 02:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.400 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:13:59.400 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:14:00.334 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.334 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:00.334 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.334 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.334 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.334 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.334 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:00.334 02:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:00.593 02:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.593 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.160 00:14:01.160 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.160 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.160 02:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.417 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.417 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.417 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.417 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.417 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.417 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.417 { 00:14:01.417 "cntlid": 139, 00:14:01.417 "qid": 0, 00:14:01.417 "state": "enabled", 00:14:01.417 "thread": "nvmf_tgt_poll_group_000", 00:14:01.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:01.417 "listen_address": { 00:14:01.417 "trtype": "TCP", 00:14:01.417 "adrfam": "IPv4", 00:14:01.417 "traddr": "10.0.0.3", 00:14:01.417 "trsvcid": "4420" 00:14:01.417 }, 00:14:01.417 "peer_address": { 00:14:01.417 "trtype": "TCP", 00:14:01.417 "adrfam": "IPv4", 00:14:01.417 "traddr": "10.0.0.1", 00:14:01.417 "trsvcid": "35960" 00:14:01.417 }, 00:14:01.417 "auth": { 00:14:01.417 "state": "completed", 00:14:01.417 "digest": "sha512", 00:14:01.417 "dhgroup": "ffdhe8192" 00:14:01.417 } 00:14:01.417 } 00:14:01.417 ]' 00:14:01.417 02:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.417 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.417 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.417 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:01.417 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.675 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.675 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.675 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.934 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:14:01.934 02:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: --dhchap-ctrl-secret DHHC-1:02:NGRiYjFkNzNiMjM1NGVjZDY0NTVmNDk1MjNhMWM3N2ViNWRiNWZlNTA3ZjJmYjE0HXGk9w==: 00:14:02.501 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.501 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:02.501 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.501 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.501 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.501 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.501 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:02.501 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.759 02:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.327 00:14:03.327 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.327 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.327 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.893 { 00:14:03.893 "cntlid": 141, 00:14:03.893 "qid": 0, 00:14:03.893 "state": "enabled", 00:14:03.893 "thread": "nvmf_tgt_poll_group_000", 00:14:03.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:03.893 "listen_address": { 00:14:03.893 "trtype": "TCP", 00:14:03.893 "adrfam": "IPv4", 00:14:03.893 "traddr": "10.0.0.3", 00:14:03.893 "trsvcid": "4420" 00:14:03.893 }, 00:14:03.893 "peer_address": { 00:14:03.893 "trtype": "TCP", 00:14:03.893 "adrfam": "IPv4", 00:14:03.893 "traddr": "10.0.0.1", 00:14:03.893 "trsvcid": "35988" 00:14:03.893 }, 00:14:03.893 "auth": { 00:14:03.893 "state": "completed", 00:14:03.893 "digest": 
"sha512", 00:14:03.893 "dhgroup": "ffdhe8192" 00:14:03.893 } 00:14:03.893 } 00:14:03.893 ]' 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.893 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.152 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:14:04.152 02:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:01:Y2NhODU4YTgzOWY0YmFkOWE2NWZjMmI4ZDNiZWI1ZTHOx/87: 00:14:04.719 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.719 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:04.719 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.719 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.719 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.719 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.719 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:04.719 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:05.285 02:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:05.852 00:14:05.852 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.852 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.852 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.110 { 00:14:06.110 "cntlid": 143, 00:14:06.110 "qid": 0, 00:14:06.110 "state": "enabled", 00:14:06.110 "thread": "nvmf_tgt_poll_group_000", 00:14:06.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:06.110 "listen_address": { 00:14:06.110 "trtype": "TCP", 00:14:06.110 "adrfam": "IPv4", 00:14:06.110 "traddr": "10.0.0.3", 00:14:06.110 "trsvcid": "4420" 00:14:06.110 }, 00:14:06.110 "peer_address": { 00:14:06.110 "trtype": "TCP", 00:14:06.110 "adrfam": "IPv4", 00:14:06.110 "traddr": "10.0.0.1", 00:14:06.110 "trsvcid": "36026" 00:14:06.110 }, 00:14:06.110 "auth": { 00:14:06.110 "state": "completed", 00:14:06.110 
"digest": "sha512", 00:14:06.110 "dhgroup": "ffdhe8192" 00:14:06.110 } 00:14:06.110 } 00:14:06.110 ]' 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.110 02:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.675 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:14:06.676 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:14:07.243 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.243 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:07.243 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.243 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.243 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.243 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:07.243 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:07.243 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:07.243 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:07.243 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:07.243 02:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.502 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:08.068 00:14:08.068 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.068 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.068 02:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.328 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.328 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.328 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.328 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.328 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.328 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.328 { 00:14:08.328 "cntlid": 145, 00:14:08.328 "qid": 0, 00:14:08.328 "state": "enabled", 00:14:08.328 "thread": "nvmf_tgt_poll_group_000", 00:14:08.328 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:08.328 "listen_address": { 00:14:08.328 "trtype": "TCP", 00:14:08.328 "adrfam": "IPv4", 00:14:08.328 "traddr": "10.0.0.3", 00:14:08.328 "trsvcid": "4420" 00:14:08.328 }, 00:14:08.328 "peer_address": { 00:14:08.328 "trtype": "TCP", 00:14:08.328 "adrfam": "IPv4", 00:14:08.328 "traddr": "10.0.0.1", 00:14:08.328 "trsvcid": "36048" 00:14:08.328 }, 00:14:08.328 "auth": { 00:14:08.328 "state": "completed", 00:14:08.328 "digest": "sha512", 00:14:08.328 "dhgroup": "ffdhe8192" 00:14:08.328 } 00:14:08.328 } 00:14:08.328 ]' 00:14:08.328 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.587 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:08.587 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.587 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:08.587 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.587 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.587 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.587 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.846 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:14:08.846 02:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:00:YTJmNDVmOThkNDJjNmE2OTE5NTIwN2MwMTVhMWJlNTNkZTIyM2NkMmRiNzNmYzY3YjECzQ==: --dhchap-ctrl-secret DHHC-1:03:YTU5MjE2MWRmYjNlMDgwZmNiN2NhMDhkNTNmMWUyMjZiOWQyNGM2M2NkMmQxNWQxMDE5ZWViMGFjNWQ3MmViMw2nko0=: 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 00:14:09.783 02:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:09.783 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:10.351 request: 00:14:10.351 { 00:14:10.351 "name": "nvme0", 00:14:10.351 "trtype": "tcp", 00:14:10.351 "traddr": "10.0.0.3", 00:14:10.351 "adrfam": "ipv4", 00:14:10.351 "trsvcid": "4420", 00:14:10.351 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:10.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:10.351 "prchk_reftag": false, 00:14:10.351 "prchk_guard": false, 00:14:10.351 "hdgst": false, 00:14:10.351 "ddgst": false, 00:14:10.351 "dhchap_key": "key2", 00:14:10.351 "allow_unrecognized_csi": false, 00:14:10.351 "method": "bdev_nvme_attach_controller", 00:14:10.351 "req_id": 1 00:14:10.351 } 00:14:10.351 Got JSON-RPC error response 00:14:10.351 response: 00:14:10.351 { 00:14:10.351 "code": -5, 00:14:10.351 "message": "Input/output error" 00:14:10.351 } 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:10.351 
02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:10.351 02:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:10.920 request: 00:14:10.920 { 00:14:10.920 "name": "nvme0", 00:14:10.920 "trtype": "tcp", 00:14:10.920 "traddr": "10.0.0.3", 00:14:10.920 "adrfam": "ipv4", 00:14:10.920 "trsvcid": "4420", 00:14:10.920 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:10.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:10.920 "prchk_reftag": false, 00:14:10.920 "prchk_guard": false, 00:14:10.920 "hdgst": false, 00:14:10.920 "ddgst": false, 00:14:10.920 "dhchap_key": "key1", 00:14:10.920 "dhchap_ctrlr_key": "ckey2", 00:14:10.920 "allow_unrecognized_csi": false, 00:14:10.920 "method": "bdev_nvme_attach_controller", 00:14:10.920 "req_id": 1 00:14:10.920 } 00:14:10.920 Got JSON-RPC error response 00:14:10.920 response: 00:14:10.920 { 
00:14:10.920 "code": -5, 00:14:10.920 "message": "Input/output error" 00:14:10.920 } 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:10.920 02:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:11.510 
request: 00:14:11.510 { 00:14:11.510 "name": "nvme0", 00:14:11.510 "trtype": "tcp", 00:14:11.510 "traddr": "10.0.0.3", 00:14:11.510 "adrfam": "ipv4", 00:14:11.510 "trsvcid": "4420", 00:14:11.510 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:11.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:11.510 "prchk_reftag": false, 00:14:11.510 "prchk_guard": false, 00:14:11.510 "hdgst": false, 00:14:11.510 "ddgst": false, 00:14:11.510 "dhchap_key": "key1", 00:14:11.510 "dhchap_ctrlr_key": "ckey1", 00:14:11.510 "allow_unrecognized_csi": false, 00:14:11.510 "method": "bdev_nvme_attach_controller", 00:14:11.510 "req_id": 1 00:14:11.510 } 00:14:11.510 Got JSON-RPC error response 00:14:11.510 response: 00:14:11.510 { 00:14:11.510 "code": -5, 00:14:11.510 "message": "Input/output error" 00:14:11.510 } 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 80038 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 80038 ']' 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 80038 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80038 00:14:11.510 killing process with pid 80038 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80038' 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 80038 00:14:11.510 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 80038 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:11.777 02:18:13 
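(Condensed sketch of the negative check traced above, not part of the captured output; same flags, NQNs and socket as in the trace. The host entry was re-added with --dhchap-key key1 only, so a host that also asks for controller authentication with --dhchap-ctrlr-key ckey1 is expected to get the JSON-RPC error -5 "Input/output error" shown above:)
  # target side: allow the host with a unidirectional key only
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 \
      --dhchap-key key1
  # host side: additionally request bidirectional auth -> bdev_nvme_attach_controller fails (-5)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1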
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=83102 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 83102 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 83102 ']' 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.777 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 83102 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 83102 ']' 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
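(The nvmfappstart call traced above amounts to relaunching the target in the test network namespace with DH-HMAC-CHAP debug tracing and RPC-gated startup, then blocking until the default RPC socket /var/tmp/spdk.sock answers. Roughly, using the suite's waitforlisten helper; the pid, 83102 here, differs per run:)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # suite helper from autotest_common.sh: wait for the target to listen on /var/tmp/spdk.sock
  waitforlisten "$nvmfpid"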
00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.036 02:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.295 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.295 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:12.295 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:12.295 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.295 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.295 null0 00:14:12.554 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.554 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:12.554 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Gd9 00:14:12.554 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.554 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.554 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.554 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.nqr ]] 00:14:12.554 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nqr 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.uoF 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.cbl ]] 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cbl 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:12.555 02:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ucm 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.PeO ]] 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.PeO 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tcs 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
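(The rpc_cmd calls above are the suite's wrapper around scripts/rpc.py against the target's default socket; they reload each generated secret into the keyring and then authorize the host NQN for key3 ahead of the sha512/ffdhe8192 connect. A condensed sketch with the key file names taken from the trace; key3 has no paired controller key in this run:)
  # register the DH-CHAP key files with the target keyring
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Gd9
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nqr
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.tcs
  # grant the host access to the subsystem, authenticating with key3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 \
      --dhchap-key key3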
00:14:12.555 02:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:13.491 nvme0n1 00:14:13.491 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.491 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.491 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.751 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.751 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.751 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.751 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.751 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.751 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.751 { 00:14:13.751 "cntlid": 1, 00:14:13.751 "qid": 0, 00:14:13.751 "state": "enabled", 00:14:13.751 "thread": "nvmf_tgt_poll_group_000", 00:14:13.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:13.751 "listen_address": { 00:14:13.751 "trtype": "TCP", 00:14:13.751 "adrfam": "IPv4", 00:14:13.751 "traddr": "10.0.0.3", 00:14:13.751 "trsvcid": "4420" 00:14:13.751 }, 00:14:13.751 "peer_address": { 00:14:13.751 "trtype": "TCP", 00:14:13.751 "adrfam": "IPv4", 00:14:13.751 "traddr": "10.0.0.1", 00:14:13.751 "trsvcid": "60534" 00:14:13.751 }, 00:14:13.751 "auth": { 00:14:13.751 "state": "completed", 00:14:13.751 "digest": "sha512", 00:14:13.751 "dhgroup": "ffdhe8192" 00:14:13.751 } 00:14:13.751 } 00:14:13.751 ]' 00:14:13.751 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.751 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.751 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.751 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:13.751 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.010 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.010 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.010 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.269 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:14:14.269 02:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:14:14.837 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.837 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:14.837 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.837 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.837 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.837 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key3 00:14:14.837 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.837 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.837 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.837 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:14.837 02:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:15.405 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:15.405 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:15.405 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:15.405 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:15.405 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.405 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:15.405 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.405 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:15.405 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.405 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.664 request: 00:14:15.664 { 00:14:15.664 "name": "nvme0", 00:14:15.664 "trtype": "tcp", 00:14:15.664 "traddr": "10.0.0.3", 00:14:15.664 "adrfam": "ipv4", 00:14:15.664 "trsvcid": "4420", 00:14:15.664 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:15.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:15.664 "prchk_reftag": false, 00:14:15.664 "prchk_guard": false, 00:14:15.664 "hdgst": false, 00:14:15.664 "ddgst": false, 00:14:15.664 "dhchap_key": "key3", 00:14:15.664 "allow_unrecognized_csi": false, 00:14:15.664 "method": "bdev_nvme_attach_controller", 00:14:15.664 "req_id": 1 00:14:15.664 } 00:14:15.664 Got JSON-RPC error response 00:14:15.664 response: 00:14:15.664 { 00:14:15.664 "code": -5, 00:14:15.664 "message": "Input/output error" 00:14:15.664 } 00:14:15.664 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:15.664 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:15.664 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:15.664 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:15.664 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:14:15.664 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:14:15.664 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:15.664 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:15.923 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:15.923 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:15.923 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:15.923 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:15.923 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.923 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:15.923 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:15.923 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:15.923 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:15.923 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.182 request: 00:14:16.182 { 00:14:16.182 "name": "nvme0", 00:14:16.182 "trtype": "tcp", 00:14:16.182 "traddr": "10.0.0.3", 00:14:16.182 "adrfam": "ipv4", 00:14:16.182 "trsvcid": "4420", 00:14:16.182 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:16.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:16.182 "prchk_reftag": false, 00:14:16.182 "prchk_guard": false, 00:14:16.182 "hdgst": false, 00:14:16.182 "ddgst": false, 00:14:16.182 "dhchap_key": "key3", 00:14:16.182 "allow_unrecognized_csi": false, 00:14:16.182 "method": "bdev_nvme_attach_controller", 00:14:16.182 "req_id": 1 00:14:16.182 } 00:14:16.182 Got JSON-RPC error response 00:14:16.182 response: 00:14:16.182 { 00:14:16.182 "code": -5, 00:14:16.182 "message": "Input/output error" 00:14:16.182 } 00:14:16.182 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:16.182 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:16.182 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:16.182 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:16.182 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:16.182 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:14:16.182 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:16.182 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:16.182 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:16.182 02:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:16.445 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:17.013 request: 00:14:17.013 { 00:14:17.013 "name": "nvme0", 00:14:17.013 "trtype": "tcp", 00:14:17.013 "traddr": "10.0.0.3", 00:14:17.013 "adrfam": "ipv4", 00:14:17.013 "trsvcid": "4420", 00:14:17.013 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:17.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:17.013 "prchk_reftag": false, 00:14:17.013 "prchk_guard": false, 00:14:17.013 "hdgst": false, 00:14:17.013 "ddgst": false, 00:14:17.013 "dhchap_key": "key0", 00:14:17.013 "dhchap_ctrlr_key": "key1", 00:14:17.013 "allow_unrecognized_csi": false, 00:14:17.013 "method": "bdev_nvme_attach_controller", 00:14:17.013 "req_id": 1 00:14:17.013 } 00:14:17.013 Got JSON-RPC error response 00:14:17.013 response: 00:14:17.013 { 00:14:17.013 "code": -5, 00:14:17.013 "message": "Input/output error" 00:14:17.013 } 00:14:17.013 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:17.013 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:17.013 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:17.013 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:14:17.013 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:14:17.013 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:17.013 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:17.272 nvme0n1 00:14:17.272 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:14:17.272 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:14:17.272 02:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.531 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.531 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.531 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.789 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 00:14:17.789 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.789 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.789 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.789 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:17.790 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:17.790 02:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:18.725 nvme0n1 00:14:18.725 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:14:18.725 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:14:18.725 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.984 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.984 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:18.984 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.984 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.984 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.984 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:14:18.984 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:18.984 02:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.243 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.243 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:14:19.243 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid 29f72880-00cc-41cd-b50e-5c2a72cc9156 -l 0 --dhchap-secret DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: --dhchap-ctrl-secret DHHC-1:03:NjE2ZWM0ZGNmNDVmMmQ0NmM0YjllZDlkMWI5NjNjNzVhYWYyMzQzMGQ1ZTFmMzU5ZDQ0ZTA5N2RhZDE0ZGZjMPXpvOo=: 00:14:20.179 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:14:20.179 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:20.179 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:20.179 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:20.179 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:20.179 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:20.179 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:20.179 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.179 02:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.437 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:20.437 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:20.437 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:20.437 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:20.437 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.437 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:20.437 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.437 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:20.437 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:20.437 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:21.373 request: 00:14:21.373 { 00:14:21.373 "name": "nvme0", 00:14:21.373 "trtype": "tcp", 00:14:21.373 "traddr": "10.0.0.3", 00:14:21.373 "adrfam": "ipv4", 00:14:21.373 "trsvcid": "4420", 00:14:21.373 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:21.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156", 00:14:21.373 "prchk_reftag": false, 00:14:21.373 "prchk_guard": false, 00:14:21.373 "hdgst": false, 00:14:21.373 "ddgst": false, 00:14:21.373 "dhchap_key": "key1", 00:14:21.373 "allow_unrecognized_csi": false, 00:14:21.373 "method": "bdev_nvme_attach_controller", 00:14:21.373 "req_id": 1 00:14:21.373 } 00:14:21.373 Got JSON-RPC error response 00:14:21.373 response: 00:14:21.373 { 00:14:21.373 "code": -5, 00:14:21.373 "message": "Input/output error" 00:14:21.373 } 00:14:21.373 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:21.373 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.373 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.373 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.373 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:21.373 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:21.373 02:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:22.309 nvme0n1 00:14:22.309 
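(The exchange above is the target-side re-key check: nvmf_subsystem_set_keys moves the host entry from key1 to key2/key3, after which an attach that still presents key1 returns -5 "Input/output error" while key2 plus controller key key3 succeeds and nvme0n1 appears. The same RPCs, condensed from the trace:)
  # rotate the keys stored for this host on the subsystem
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_set_keys \
      nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 \
      --dhchap-key key2 --dhchap-ctrlr-key key3
  # stale key1 now fails; the current key pair attaches controller nvme0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3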
02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:22.309 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:22.309 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.568 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.568 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.568 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.136 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:23.136 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.136 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.136 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.136 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:23.136 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:23.136 02:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:23.395 nvme0n1 00:14:23.395 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:23.395 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:23.395 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.654 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.655 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.655 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.913 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.914 02:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: '' 2s 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: ]] 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN: 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:23.914 02:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: 2s 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:26.487 02:18:27 
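(The nvme_set_keys/waitforblk pair traced above is the kernel-host side of the re-key: the rotated DHHC-1 secret is written under /sys/devices/virtual/nvme-fabrics/ctl/nvme0, the helper waits its 2 s timeout, and the test then polls lsblk until nvme0n1 is visible again. A rough sketch; the redirect target of the echo is not visible in the trace, so the dhchap_secret attribute name below is an assumption:)
  ctl=nvme0
  dev=/sys/devices/virtual/nvme-fabrics/ctl/$ctl
  # push the rotated host secret to the kernel controller (sysfs attribute name assumed)
  echo "DHHC-1:01:NDkyNjJhMGIzNzNhNDJjN2ZiNGU5YTFkMGU3OGNjMWX6zvRN:" > "$dev/dhchap_secret"
  sleep 2s
  # simplified stand-in for the suite's waitforblk: wait for the namespace to reappear
  until lsblk -l -o NAME | grep -q -w nvme0n1; do sleep 1; done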
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: ]] 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NWI3NzQxMDlhMzcxMDViYzUzNTc2MGE5MWMxZjJhMDlmNzUxZTZkZWExZTc3ZGMz/Gr34A==: 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:26.487 02:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:28.391 02:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:29.327 nvme0n1 00:14:29.327 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:29.327 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.327 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.327 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.327 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:29.327 02:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:29.895 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:29.895 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.895 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:30.153 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.153 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:30.153 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.153 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.153 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.153 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:30.153 02:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:30.411 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:30.411 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:30.411 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:30.978 02:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:30.978 02:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:31.545 request: 00:14:31.545 { 00:14:31.545 "name": "nvme0", 00:14:31.545 "dhchap_key": "key1", 00:14:31.545 "dhchap_ctrlr_key": "key3", 00:14:31.545 "method": "bdev_nvme_set_keys", 00:14:31.545 "req_id": 1 00:14:31.545 } 00:14:31.545 Got JSON-RPC error response 00:14:31.545 response: 00:14:31.545 { 00:14:31.545 "code": -13, 00:14:31.545 "message": "Permission denied" 00:14:31.545 } 00:14:31.545 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:31.545 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:31.545 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:31.545 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:31.545 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:31.545 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:31.545 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.802 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:31.803 02:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:32.738 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:32.738 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:32.738 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.307 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:33.307 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:33.307 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.307 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.307 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.307 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:33.307 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:33.307 02:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:34.244 nvme0n1 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:34.244 02:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:34.811 request: 00:14:34.811 { 00:14:34.811 "name": "nvme0", 00:14:34.811 "dhchap_key": "key2", 00:14:34.811 "dhchap_ctrlr_key": "key0", 00:14:34.811 "method": "bdev_nvme_set_keys", 00:14:34.811 "req_id": 1 00:14:34.811 } 00:14:34.811 Got JSON-RPC error response 00:14:34.811 response: 00:14:34.811 { 00:14:34.811 "code": -13, 00:14:34.811 "message": "Permission denied" 00:14:34.811 } 00:14:34.811 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:34.811 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:34.811 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:34.811 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:34.811 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:34.811 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.811 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:35.070 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:35.070 02:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:36.447 02:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:36.447 02:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:36.447 02:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 80062 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 80062 ']' 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 80062 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80062 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:36.447 killing process with pid 80062 00:14:36.447 02:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80062' 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 80062 00:14:36.447 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 80062 00:14:36.705 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:36.705 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:36.705 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:36.705 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:36.705 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:36.705 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:36.705 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:36.705 rmmod nvme_tcp 00:14:36.705 rmmod nvme_fabrics 00:14:36.705 rmmod nvme_keyring 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 83102 ']' 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 83102 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 83102 ']' 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 83102 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83102 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83102' 00:14:36.964 killing process with pid 83102 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 83102 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 83102 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 
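
The re-keying sequence traced above (target/auth.sh steps 252-273) follows a fixed order: the target subsystem is told about the new DH-HMAC-CHAP key pair first, the already-attached host controller is then re-authenticated in place, and a mismatched pair is expected to fail with JSON-RPC error -13. A condensed sketch of the equivalent standalone rpc.py calls is shown below; the socket path, NQNs and key names are copied from the trace, but the sketch itself is an illustration, not the test script.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156

# 1) Target side: allow the new key pair for this host on the subsystem.
$RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3

# 2) Host side: re-authenticate the existing controller with the new keys.
$RPC -s "$HOST_SOCK" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# 3) The controller should survive the re-key; a mismatched pair (e.g. key1/key3)
#    is rejected with "Permission denied" (code -13), as the NOT cases above show.
$RPC -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
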
00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:36.964 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.222 02:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.222 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:37.222 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Gd9 /tmp/spdk.key-sha256.uoF /tmp/spdk.key-sha384.ucm /tmp/spdk.key-sha512.tcs /tmp/spdk.key-sha512.nqr /tmp/spdk.key-sha384.cbl /tmp/spdk.key-sha256.PeO '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:37.222 00:14:37.222 real 3m9.563s 00:14:37.222 user 7m35.741s 00:14:37.222 sys 0m28.042s 00:14:37.222 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:37.222 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.222 ************************************ 00:14:37.222 END TEST nvmf_auth_target 
00:14:37.222 ************************************ 00:14:37.222 02:18:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:37.222 02:18:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:37.222 02:18:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:37.222 02:18:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:37.222 02:18:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.222 ************************************ 00:14:37.222 START TEST nvmf_bdevio_no_huge 00:14:37.222 ************************************ 00:14:37.222 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:37.482 * Looking for test storage... 00:14:37.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:37.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.482 --rc genhtml_branch_coverage=1 00:14:37.482 --rc genhtml_function_coverage=1 00:14:37.482 --rc genhtml_legend=1 00:14:37.482 --rc geninfo_all_blocks=1 00:14:37.482 --rc geninfo_unexecuted_blocks=1 00:14:37.482 00:14:37.482 ' 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:37.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.482 --rc genhtml_branch_coverage=1 00:14:37.482 --rc genhtml_function_coverage=1 00:14:37.482 --rc genhtml_legend=1 00:14:37.482 --rc geninfo_all_blocks=1 00:14:37.482 --rc geninfo_unexecuted_blocks=1 00:14:37.482 00:14:37.482 ' 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:37.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.482 --rc genhtml_branch_coverage=1 00:14:37.482 --rc genhtml_function_coverage=1 00:14:37.482 --rc genhtml_legend=1 00:14:37.482 --rc geninfo_all_blocks=1 00:14:37.482 --rc geninfo_unexecuted_blocks=1 00:14:37.482 00:14:37.482 ' 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:37.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.482 --rc genhtml_branch_coverage=1 00:14:37.482 --rc genhtml_function_coverage=1 00:14:37.482 --rc genhtml_legend=1 00:14:37.482 --rc geninfo_all_blocks=1 00:14:37.482 --rc geninfo_unexecuted_blocks=1 00:14:37.482 00:14:37.482 ' 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:37.482 
02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:37.482 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:37.482 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:37.483 
02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:37.483 Cannot find device "nvmf_init_br" 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:37.483 Cannot find device "nvmf_init_br2" 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:37.483 Cannot find device "nvmf_tgt_br" 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.483 Cannot find device "nvmf_tgt_br2" 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:37.483 Cannot find device "nvmf_init_br" 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:37.483 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:37.741 Cannot find device "nvmf_init_br2" 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:37.741 Cannot find device "nvmf_tgt_br" 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:37.741 Cannot find device "nvmf_tgt_br2" 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:37.741 Cannot find device "nvmf_br" 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:37.741 Cannot find device "nvmf_init_if" 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:37.741 Cannot find device "nvmf_init_if2" 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:37.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.741 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:37.741 02:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:37.741 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:37.742 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:38.000 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.000 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:14:38.000 00:14:38.000 --- 10.0.0.3 ping statistics --- 00:14:38.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.000 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:38.000 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:38.000 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:14:38.000 00:14:38.000 --- 10.0.0.4 ping statistics --- 00:14:38.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.000 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:38.000 00:14:38.000 --- 10.0.0.1 ping statistics --- 00:14:38.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.000 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:38.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:38.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:14:38.000 00:14:38.000 --- 10.0.0.2 ping statistics --- 00:14:38.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.000 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=83743 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 83743 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 83743 ']' 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.000 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.001 02:18:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:38.001 [2024-11-08 02:18:39.774124] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:38.001 [2024-11-08 02:18:39.774231] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:38.259 [2024-11-08 02:18:39.921422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.259 [2024-11-08 02:18:40.029763] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.259 [2024-11-08 02:18:40.029840] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.259 [2024-11-08 02:18:40.029853] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.259 [2024-11-08 02:18:40.029863] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.259 [2024-11-08 02:18:40.029873] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.259 [2024-11-08 02:18:40.030069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:14:38.259 [2024-11-08 02:18:40.030810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:14:38.259 [2024-11-08 02:18:40.030899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:14:38.259 [2024-11-08 02:18:40.030908] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.259 [2024-11-08 02:18:40.037354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:39.195 [2024-11-08 02:18:40.807672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:39.195 Malloc0 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.195 02:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:39.195 [2024-11-08 02:18:40.847818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:14:39.195 { 00:14:39.195 "params": { 00:14:39.195 "name": "Nvme$subsystem", 00:14:39.195 "trtype": "$TEST_TRANSPORT", 00:14:39.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.195 "adrfam": "ipv4", 00:14:39.195 "trsvcid": "$NVMF_PORT", 00:14:39.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.195 "hdgst": ${hdgst:-false}, 00:14:39.195 "ddgst": ${ddgst:-false} 00:14:39.195 }, 00:14:39.195 "method": "bdev_nvme_attach_controller" 00:14:39.195 } 00:14:39.195 EOF 00:14:39.195 )") 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
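
Before bdevio runs, the trace above stands up a complete TCP target: a transport, a 64 MiB malloc bdev, a subsystem carrying that namespace, and a listener on 10.0.0.3:4420, after which gen_nvmf_target_json emits the initiator configuration that bdevio reads from /dev/fd/62 (the resulting JSON is printed immediately below). A minimal sketch of the equivalent rpc.py calls, assuming a target app already listening on the default /var/tmp/spdk.sock, would be:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the same options as in the trace
$RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB backing bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
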
00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:14:39.195 02:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:14:39.195 "params": { 00:14:39.195 "name": "Nvme1", 00:14:39.195 "trtype": "tcp", 00:14:39.196 "traddr": "10.0.0.3", 00:14:39.196 "adrfam": "ipv4", 00:14:39.196 "trsvcid": "4420", 00:14:39.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.196 "hdgst": false, 00:14:39.196 "ddgst": false 00:14:39.196 }, 00:14:39.196 "method": "bdev_nvme_attach_controller" 00:14:39.196 }' 00:14:39.196 [2024-11-08 02:18:40.910472] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:39.196 [2024-11-08 02:18:40.910592] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83785 ] 00:14:39.196 [2024-11-08 02:18:41.051837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:39.454 [2024-11-08 02:18:41.162204] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.454 [2024-11-08 02:18:41.162338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.454 [2024-11-08 02:18:41.162344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.454 [2024-11-08 02:18:41.177304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.714 I/O targets: 00:14:39.714 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:39.714 00:14:39.714 00:14:39.714 CUnit - A unit testing framework for C - Version 2.1-3 00:14:39.714 http://cunit.sourceforge.net/ 00:14:39.714 00:14:39.714 00:14:39.714 Suite: bdevio tests on: Nvme1n1 00:14:39.714 Test: blockdev write read block ...passed 00:14:39.714 Test: blockdev write zeroes read block ...passed 00:14:39.714 Test: blockdev write zeroes read no split ...passed 00:14:39.714 Test: blockdev write zeroes read split ...passed 00:14:39.714 Test: blockdev write zeroes read split partial ...passed 00:14:39.714 Test: blockdev reset ...[2024-11-08 02:18:41.400502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:39.714 [2024-11-08 02:18:41.400606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ef6a0 (9): Bad file descriptor 00:14:39.714 [2024-11-08 02:18:41.419523] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:39.714 passed 00:14:39.714 Test: blockdev write read 8 blocks ...passed 00:14:39.714 Test: blockdev write read size > 128k ...passed 00:14:39.714 Test: blockdev write read invalid size ...passed 00:14:39.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:39.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:39.714 Test: blockdev write read max offset ...passed 00:14:39.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:39.714 Test: blockdev writev readv 8 blocks ...passed 00:14:39.714 Test: blockdev writev readv 30 x 1block ...passed 00:14:39.714 Test: blockdev writev readv block ...passed 00:14:39.714 Test: blockdev writev readv size > 128k ...passed 00:14:39.714 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:39.714 Test: blockdev comparev and writev ...[2024-11-08 02:18:41.427924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.714 [2024-11-08 02:18:41.428098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:39.714 [2024-11-08 02:18:41.428215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.714 [2024-11-08 02:18:41.428298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:39.714 [2024-11-08 02:18:41.428756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.714 [2024-11-08 02:18:41.428908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:39.714 [2024-11-08 02:18:41.428991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.714 [2024-11-08 02:18:41.429061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:39.714 [2024-11-08 02:18:41.429508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.714 [2024-11-08 02:18:41.429615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:39.714 [2024-11-08 02:18:41.429693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.714 [2024-11-08 02:18:41.429762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:39.714 [2024-11-08 02:18:41.430249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.714 [2024-11-08 02:18:41.430354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:39.714 [2024-11-08 02:18:41.430433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:39.714 [2024-11-08 02:18:41.430502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:39.714 passed 00:14:39.714 Test: blockdev nvme passthru rw ...passed 00:14:39.714 Test: blockdev nvme passthru vendor specific ...[2024-11-08 02:18:41.431420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.714 [2024-11-08 02:18:41.431533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:39.714 [2024-11-08 02:18:41.431710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.714 [2024-11-08 02:18:41.431813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:39.714 [2024-11-08 02:18:41.431973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.714 [2024-11-08 02:18:41.432075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:39.714 [2024-11-08 02:18:41.432298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:39.714 [2024-11-08 02:18:41.432403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:39.714 passed 00:14:39.714 Test: blockdev nvme admin passthru ...passed 00:14:39.714 Test: blockdev copy ...passed 00:14:39.714 00:14:39.714 Run Summary: Type Total Ran Passed Failed Inactive 00:14:39.714 suites 1 1 n/a 0 0 00:14:39.714 tests 23 23 23 0 0 00:14:39.714 asserts 152 152 152 0 n/a 00:14:39.714 00:14:39.714 Elapsed time = 0.182 seconds 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.974 rmmod nvme_tcp 00:14:39.974 rmmod nvme_fabrics 00:14:39.974 rmmod nvme_keyring 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 83743 ']' 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 83743 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 83743 ']' 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 83743 00:14:39.974 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:14:40.263 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.263 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83743 00:14:40.263 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:40.263 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:40.263 killing process with pid 83743 00:14:40.263 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83743' 00:14:40.263 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 83743 00:14:40.263 02:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 83743 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:40.545 02:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:40.545 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:40.804 00:14:40.804 real 0m3.432s 00:14:40.804 user 0m10.043s 00:14:40.804 sys 0m1.382s 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:40.804 ************************************ 00:14:40.804 END TEST nvmf_bdevio_no_huge 00:14:40.804 ************************************ 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:40.804 ************************************ 00:14:40.804 START TEST nvmf_tls 00:14:40.804 ************************************ 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:40.804 * Looking for test storage... 
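A minimal sketch, not part of the recorded run: the nvmf_bdevio_no_huge suite above drives bdevio with the attach parameters printed in the JSON at the top of this excerpt. Assuming a target is still listening on 10.0.0.3:4420 and exporting nqn.2016-06.io.spdk:cnode1, the same attach and teardown can be issued by hand with rpc.py, reusing only values visible in this log (bdev_nvme_detach_controller is the one call below that does not appear in the trace).

# attach the controller described by the logged bdev_nvme_attach_controller params
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# detach and remove the subsystem, mirroring the cleanup at the end of the suite
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1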
00:14:40.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:40.804 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.064 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:41.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.064 --rc genhtml_branch_coverage=1 00:14:41.064 --rc genhtml_function_coverage=1 00:14:41.064 --rc genhtml_legend=1 00:14:41.064 --rc geninfo_all_blocks=1 00:14:41.064 --rc geninfo_unexecuted_blocks=1 00:14:41.064 00:14:41.064 ' 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:41.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.065 --rc genhtml_branch_coverage=1 00:14:41.065 --rc genhtml_function_coverage=1 00:14:41.065 --rc genhtml_legend=1 00:14:41.065 --rc geninfo_all_blocks=1 00:14:41.065 --rc geninfo_unexecuted_blocks=1 00:14:41.065 00:14:41.065 ' 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:41.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.065 --rc genhtml_branch_coverage=1 00:14:41.065 --rc genhtml_function_coverage=1 00:14:41.065 --rc genhtml_legend=1 00:14:41.065 --rc geninfo_all_blocks=1 00:14:41.065 --rc geninfo_unexecuted_blocks=1 00:14:41.065 00:14:41.065 ' 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:41.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.065 --rc genhtml_branch_coverage=1 00:14:41.065 --rc genhtml_function_coverage=1 00:14:41.065 --rc genhtml_legend=1 00:14:41.065 --rc geninfo_all_blocks=1 00:14:41.065 --rc geninfo_unexecuted_blocks=1 00:14:41.065 00:14:41.065 ' 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.065 02:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.065 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:41.065 
02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:41.065 Cannot find device "nvmf_init_br" 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:41.065 Cannot find device "nvmf_init_br2" 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:41.065 Cannot find device "nvmf_tgt_br" 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:41.065 Cannot find device "nvmf_tgt_br2" 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:41.065 Cannot find device "nvmf_init_br" 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:41.065 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:41.066 Cannot find device "nvmf_init_br2" 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:41.066 Cannot find device "nvmf_tgt_br" 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:41.066 Cannot find device "nvmf_tgt_br2" 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:41.066 Cannot find device "nvmf_br" 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:41.066 Cannot find device "nvmf_init_if" 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:41.066 Cannot find device "nvmf_init_if2" 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:41.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:41.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:41.066 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:41.325 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:41.325 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:41.325 02:18:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:41.325 02:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:41.325 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:41.325 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:14:41.325 00:14:41.325 --- 10.0.0.3 ping statistics --- 00:14:41.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.325 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:41.325 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:41.325 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:14:41.325 00:14:41.325 --- 10.0.0.4 ping statistics --- 00:14:41.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.325 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:41.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:41.325 00:14:41.325 --- 10.0.0.1 ping statistics --- 00:14:41.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.325 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:41.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:41.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:14:41.325 00:14:41.325 --- 10.0.0.2 ping statistics --- 00:14:41.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.325 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:41.325 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84018 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84018 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84018 ']' 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.584 02:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.584 [2024-11-08 02:18:43.276069] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:41.584 [2024-11-08 02:18:43.276199] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.584 [2024-11-08 02:18:43.418443] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.584 [2024-11-08 02:18:43.461065] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.584 [2024-11-08 02:18:43.461129] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.584 [2024-11-08 02:18:43.461144] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.584 [2024-11-08 02:18:43.461153] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.584 [2024-11-08 02:18:43.461169] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.584 [2024-11-08 02:18:43.461199] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.519 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.519 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:42.519 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:42.519 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:42.519 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.519 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.519 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:42.519 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:42.778 true 00:14:42.778 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:42.778 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:43.036 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:43.036 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:43.036 02:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:43.295 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:43.295 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:43.553 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:43.553 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:43.553 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:44.121 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:44.121 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:44.121 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:44.121 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:44.121 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:44.121 02:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:44.379 02:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:44.379 02:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:44.379 02:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:44.638 02:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:44.638 02:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:44.897 02:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:44.897 02:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:44.897 02:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:45.155 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:45.155 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.72WxUXgdwe 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.H8xE0lKhZP 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.72WxUXgdwe 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.H8xE0lKhZP 00:14:45.723 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:45.982 02:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:46.241 [2024-11-08 02:18:48.042669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.241 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.72WxUXgdwe 00:14:46.241 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.72WxUXgdwe 00:14:46.241 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:46.500 [2024-11-08 02:18:48.306770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.500 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:46.759 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:47.017 [2024-11-08 02:18:48.814899] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:47.017 [2024-11-08 02:18:48.815200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:47.017 02:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:47.275 malloc0 00:14:47.276 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:47.535 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.72WxUXgdwe 00:14:47.794 02:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:48.053 02:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.72WxUXgdwe 00:15:00.256 Initializing NVMe Controllers 00:15:00.256 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:00.256 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:00.256 Initialization complete. Launching workers. 00:15:00.256 ======================================================== 00:15:00.256 Latency(us) 00:15:00.256 Device Information : IOPS MiB/s Average min max 00:15:00.256 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9763.68 38.14 6556.33 963.35 15845.78 00:15:00.256 ======================================================== 00:15:00.256 Total : 9763.68 38.14 6556.33 963.35 15845.78 00:15:00.256 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.72WxUXgdwe 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.72WxUXgdwe 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84256 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84256 /var/tmp/bdevperf.sock 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84256 ']' 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
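A minimal recap of the target-side TLS setup performed by setup_nvmf_tgt in the trace above, assuming the nvmf target was started with --wait-for-rpc as logged and $key points at the 0600-permission PSK interchange file (/tmp/tmp.72WxUXgdwe in this run); every RPC below is the one just issued in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_impl_set_options -i ssl --tls-version 13    # must happen before framework init, hence --wait-for-rpc
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k    # -k marks the listener as TLS
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0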
00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:00.256 [2024-11-08 02:19:00.139651] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:00.256 [2024-11-08 02:19:00.139804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84256 ] 00:15:00.256 [2024-11-08 02:19:00.281492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.256 [2024-11-08 02:19:00.322932] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.256 [2024-11-08 02:19:00.355936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.72WxUXgdwe 00:15:00.256 02:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:00.256 [2024-11-08 02:19:01.009475] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.257 TLSTESTn1 00:15:00.257 02:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:00.257 Running I/O for 10 seconds... 
00:15:01.452 3216.00 IOPS, 12.56 MiB/s [2024-11-08T02:19:04.273Z] 3775.00 IOPS, 14.75 MiB/s [2024-11-08T02:19:05.651Z] 3902.33 IOPS, 15.24 MiB/s [2024-11-08T02:19:06.587Z] 3964.75 IOPS, 15.49 MiB/s [2024-11-08T02:19:07.522Z] 4023.40 IOPS, 15.72 MiB/s [2024-11-08T02:19:08.457Z] 4094.83 IOPS, 16.00 MiB/s [2024-11-08T02:19:09.392Z] 4108.14 IOPS, 16.05 MiB/s [2024-11-08T02:19:10.325Z] 4094.12 IOPS, 15.99 MiB/s [2024-11-08T02:19:11.260Z] 4077.78 IOPS, 15.93 MiB/s [2024-11-08T02:19:11.260Z] 4002.80 IOPS, 15.64 MiB/s 00:15:09.376 Latency(us) 00:15:09.376 [2024-11-08T02:19:11.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.376 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:09.376 Verification LBA range: start 0x0 length 0x2000 00:15:09.376 TLSTESTn1 : 10.01 4009.58 15.66 0.00 0.00 31871.97 4706.68 30980.65 00:15:09.376 [2024-11-08T02:19:11.260Z] =================================================================================================================== 00:15:09.376 [2024-11-08T02:19:11.260Z] Total : 4009.58 15.66 0.00 0.00 31871.97 4706.68 30980.65 00:15:09.376 { 00:15:09.376 "results": [ 00:15:09.376 { 00:15:09.376 "job": "TLSTESTn1", 00:15:09.376 "core_mask": "0x4", 00:15:09.376 "workload": "verify", 00:15:09.376 "status": "finished", 00:15:09.376 "verify_range": { 00:15:09.376 "start": 0, 00:15:09.376 "length": 8192 00:15:09.376 }, 00:15:09.376 "queue_depth": 128, 00:15:09.376 "io_size": 4096, 00:15:09.376 "runtime": 10.014265, 00:15:09.376 "iops": 4009.5803336540425, 00:15:09.376 "mibps": 15.662423178336104, 00:15:09.376 "io_failed": 0, 00:15:09.376 "io_timeout": 0, 00:15:09.376 "avg_latency_us": 31871.971773421206, 00:15:09.376 "min_latency_us": 4706.676363636364, 00:15:09.376 "max_latency_us": 30980.654545454545 00:15:09.376 } 00:15:09.376 ], 00:15:09.376 "core_count": 1 00:15:09.376 } 00:15:09.376 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:09.376 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84256 00:15:09.376 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84256 ']' 00:15:09.376 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84256 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84256 00:15:09.635 killing process with pid 84256 00:15:09.635 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.635 00:15:09.635 Latency(us) 00:15:09.635 [2024-11-08T02:19:11.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.635 [2024-11-08T02:19:11.519Z] =================================================================================================================== 00:15:09.635 [2024-11-08T02:19:11.519Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 84256' 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84256 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84256 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H8xE0lKhZP 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H8xE0lKhZP 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.635 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H8xE0lKhZP 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H8xE0lKhZP 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84383 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84383 /var/tmp/bdevperf.sock 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84383 ']' 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.636 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.895 [2024-11-08 02:19:11.524799] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:09.895 [2024-11-08 02:19:11.525712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84383 ] 00:15:09.895 [2024-11-08 02:19:11.667611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.895 [2024-11-08 02:19:11.710594] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.895 [2024-11-08 02:19:11.743765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:10.154 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.154 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:10.154 02:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H8xE0lKhZP 00:15:10.412 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:10.672 [2024-11-08 02:19:12.409480] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:10.672 [2024-11-08 02:19:12.416136] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:10.672 [2024-11-08 02:19:12.416169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2e550 (107): Transport endpoint is not connected 00:15:10.672 [2024-11-08 02:19:12.417160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2e550 (9): Bad file descriptor 00:15:10.672 [2024-11-08 02:19:12.418156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:10.672 [2024-11-08 02:19:12.418219] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:10.672 [2024-11-08 02:19:12.418230] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:10.672 [2024-11-08 02:19:12.418244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:10.672 request: 00:15:10.672 { 00:15:10.672 "name": "TLSTEST", 00:15:10.672 "trtype": "tcp", 00:15:10.672 "traddr": "10.0.0.3", 00:15:10.672 "adrfam": "ipv4", 00:15:10.672 "trsvcid": "4420", 00:15:10.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.672 "prchk_reftag": false, 00:15:10.672 "prchk_guard": false, 00:15:10.672 "hdgst": false, 00:15:10.672 "ddgst": false, 00:15:10.672 "psk": "key0", 00:15:10.672 "allow_unrecognized_csi": false, 00:15:10.672 "method": "bdev_nvme_attach_controller", 00:15:10.672 "req_id": 1 00:15:10.672 } 00:15:10.672 Got JSON-RPC error response 00:15:10.672 response: 00:15:10.672 { 00:15:10.672 "code": -5, 00:15:10.672 "message": "Input/output error" 00:15:10.672 } 00:15:10.672 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84383 00:15:10.672 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84383 ']' 00:15:10.672 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84383 00:15:10.672 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:10.672 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:10.672 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84383 00:15:10.672 killing process with pid 84383 00:15:10.672 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.672 00:15:10.672 Latency(us) 00:15:10.672 [2024-11-08T02:19:12.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.672 [2024-11-08T02:19:12.556Z] =================================================================================================================== 00:15:10.672 [2024-11-08T02:19:12.556Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:10.672 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:10.672 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:10.672 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84383' 00:15:10.672 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84383 00:15:10.672 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84383 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.72WxUXgdwe 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.72WxUXgdwe 
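In the @147 case that just failed, the initiator presents a key file (/tmp/tmp.H8xE0lKhZP) that does not match what the target was configured with, so the connection is torn down during setup: spdk_sock_recv() reports errno 107 (Transport endpoint is not connected), the qpair then shows a bad file descriptor, the controller lands in a failed state, and the attach RPC returns code -5 (Input/output error). The suite only needs the command to exit nonzero; the NOT/valid_exec_arg helpers from autotest_common.sh capture that, roughly like this simplified stand-in (the real helper also inspects the exact exit status, as the es checks in the trace show):

    # simplified stand-in for the suite's NOT helper: succeed only when the wrapped command fails
    NOT() {
        "$@" && return 1
        return 0
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H8xE0lKhZP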
00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.72WxUXgdwe 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.72WxUXgdwe 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84410 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84410 /var/tmp/bdevperf.sock 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84410 ']' 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:10.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.932 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.932 [2024-11-08 02:19:12.681401] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:10.932 [2024-11-08 02:19:12.681518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84410 ] 00:15:11.191 [2024-11-08 02:19:12.818609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.191 [2024-11-08 02:19:12.853886] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.191 [2024-11-08 02:19:12.882159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:11.191 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:11.191 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:11.191 02:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.72WxUXgdwe 00:15:11.450 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:15:11.709 [2024-11-08 02:19:13.469454] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:11.709 [2024-11-08 02:19:13.476253] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:11.709 [2024-11-08 02:19:13.476294] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:11.709 [2024-11-08 02:19:13.476345] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:11.709 [2024-11-08 02:19:13.476381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1550 (107): Transport endpoint is not connected 00:15:11.709 [2024-11-08 02:19:13.477369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1550 (9): Bad file descriptor 00:15:11.709 [2024-11-08 02:19:13.478366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:11.709 [2024-11-08 02:19:13.478400] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:11.709 [2024-11-08 02:19:13.478411] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:11.709 [2024-11-08 02:19:13.478441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:11.709 request: 00:15:11.709 { 00:15:11.709 "name": "TLSTEST", 00:15:11.709 "trtype": "tcp", 00:15:11.709 "traddr": "10.0.0.3", 00:15:11.709 "adrfam": "ipv4", 00:15:11.709 "trsvcid": "4420", 00:15:11.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.709 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:11.709 "prchk_reftag": false, 00:15:11.709 "prchk_guard": false, 00:15:11.709 "hdgst": false, 00:15:11.709 "ddgst": false, 00:15:11.709 "psk": "key0", 00:15:11.709 "allow_unrecognized_csi": false, 00:15:11.709 "method": "bdev_nvme_attach_controller", 00:15:11.709 "req_id": 1 00:15:11.709 } 00:15:11.709 Got JSON-RPC error response 00:15:11.709 response: 00:15:11.709 { 00:15:11.709 "code": -5, 00:15:11.709 "message": "Input/output error" 00:15:11.709 } 00:15:11.709 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84410 00:15:11.709 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84410 ']' 00:15:11.709 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84410 00:15:11.709 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:11.709 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:11.709 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84410 00:15:11.709 killing process with pid 84410 00:15:11.709 Received shutdown signal, test time was about 10.000000 seconds 00:15:11.709 00:15:11.709 Latency(us) 00:15:11.709 [2024-11-08T02:19:13.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.709 [2024-11-08T02:19:13.593Z] =================================================================================================================== 00:15:11.709 [2024-11-08T02:19:13.593Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:11.709 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:11.709 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:11.709 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84410' 00:15:11.709 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84410 00:15:11.709 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84410 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.72WxUXgdwe 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.72WxUXgdwe 
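The @150 case fails from the other direction: host2 reuses key0, but the target-side errors in the trace ("Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1") show the lookup failing by identity, i.e. the target selects the PSK from the (host NQN, subsystem NQN) pair carried in the TLS PSK identity, and no key was registered for host2. Presumably this pairing would only succeed after an explicit grant for that host, along the lines of the add_host call this run makes for host1 later on (the host2 variant below is hypothetical):

    # grant host2 access to cnode1 with the named key (hypothetical; this run only does it for host1)
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0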
00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.72WxUXgdwe 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.72WxUXgdwe 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84431 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84431 /var/tmp/bdevperf.sock 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84431 ']' 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:11.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.968 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.968 [2024-11-08 02:19:13.735273] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:11.968 [2024-11-08 02:19:13.735388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84431 ] 00:15:12.227 [2024-11-08 02:19:13.867798] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.227 [2024-11-08 02:19:13.904098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.227 [2024-11-08 02:19:13.933420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:12.227 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.227 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:12.227 02:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.72WxUXgdwe 00:15:12.486 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:12.769 [2024-11-08 02:19:14.545222] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:12.769 [2024-11-08 02:19:14.556737] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:12.769 [2024-11-08 02:19:14.556811] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:12.769 [2024-11-08 02:19:14.556908] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:12.769 [2024-11-08 02:19:14.557899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb8550 (107): Transport endpoint is not connected 00:15:12.769 [2024-11-08 02:19:14.558869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb8550 (9): Bad file descriptor 00:15:12.769 [2024-11-08 02:19:14.559864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:12.769 [2024-11-08 02:19:14.559893] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:12.769 [2024-11-08 02:19:14.559905] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:15:12.769 [2024-11-08 02:19:14.559921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:15:12.769 request: 00:15:12.769 { 00:15:12.769 "name": "TLSTEST", 00:15:12.769 "trtype": "tcp", 00:15:12.769 "traddr": "10.0.0.3", 00:15:12.769 "adrfam": "ipv4", 00:15:12.769 "trsvcid": "4420", 00:15:12.769 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:12.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:12.769 "prchk_reftag": false, 00:15:12.769 "prchk_guard": false, 00:15:12.769 "hdgst": false, 00:15:12.769 "ddgst": false, 00:15:12.769 "psk": "key0", 00:15:12.769 "allow_unrecognized_csi": false, 00:15:12.769 "method": "bdev_nvme_attach_controller", 00:15:12.769 "req_id": 1 00:15:12.769 } 00:15:12.769 Got JSON-RPC error response 00:15:12.769 response: 00:15:12.769 { 00:15:12.769 "code": -5, 00:15:12.769 "message": "Input/output error" 00:15:12.769 } 00:15:12.769 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84431 00:15:12.770 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84431 ']' 00:15:12.770 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84431 00:15:12.770 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:12.770 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.770 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84431 00:15:12.770 killing process with pid 84431 00:15:12.770 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.770 00:15:12.770 Latency(us) 00:15:12.770 [2024-11-08T02:19:14.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.770 [2024-11-08T02:19:14.654Z] =================================================================================================================== 00:15:12.770 [2024-11-08T02:19:14.654Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:12.770 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:12.770 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:12.770 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84431' 00:15:12.770 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84431 00:15:12.770 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84431 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:13.029 02:19:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84452 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84452 /var/tmp/bdevperf.sock 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84452 ']' 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.029 02:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.029 [2024-11-08 02:19:14.844490] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:13.029 [2024-11-08 02:19:14.844845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84452 ] 00:15:13.288 [2024-11-08 02:19:14.989145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.288 [2024-11-08 02:19:15.038917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.288 [2024-11-08 02:19:15.073891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.288 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.288 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:13.288 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:15:13.854 [2024-11-08 02:19:15.487734] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:15:13.854 [2024-11-08 02:19:15.487812] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:13.854 request: 00:15:13.854 { 00:15:13.854 "name": "key0", 00:15:13.854 "path": "", 00:15:13.854 "method": "keyring_file_add_key", 00:15:13.854 "req_id": 1 00:15:13.854 } 00:15:13.854 Got JSON-RPC error response 00:15:13.854 response: 00:15:13.854 { 00:15:13.854 "code": -1, 00:15:13.854 "message": "Operation not permitted" 00:15:13.854 } 00:15:13.854 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:14.113 [2024-11-08 02:19:15.879934] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:14.113 [2024-11-08 02:19:15.880064] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:14.113 request: 00:15:14.113 { 00:15:14.113 "name": "TLSTEST", 00:15:14.113 "trtype": "tcp", 00:15:14.113 "traddr": "10.0.0.3", 00:15:14.113 "adrfam": "ipv4", 00:15:14.113 "trsvcid": "4420", 00:15:14.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.113 "prchk_reftag": false, 00:15:14.113 "prchk_guard": false, 00:15:14.113 "hdgst": false, 00:15:14.113 "ddgst": false, 00:15:14.113 "psk": "key0", 00:15:14.113 "allow_unrecognized_csi": false, 00:15:14.113 "method": "bdev_nvme_attach_controller", 00:15:14.113 "req_id": 1 00:15:14.113 } 00:15:14.113 Got JSON-RPC error response 00:15:14.113 response: 00:15:14.113 { 00:15:14.113 "code": -126, 00:15:14.113 "message": "Required key not available" 00:15:14.113 } 00:15:14.113 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84452 00:15:14.113 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84452 ']' 00:15:14.113 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84452 00:15:14.113 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:14.113 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.113 02:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84452 00:15:14.113 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:14.113 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:14.113 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84452' 00:15:14.113 killing process with pid 84452 00:15:14.113 Received shutdown signal, test time was about 10.000000 seconds 00:15:14.113 00:15:14.113 Latency(us) 00:15:14.113 [2024-11-08T02:19:15.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.113 [2024-11-08T02:19:15.997Z] =================================================================================================================== 00:15:14.113 [2024-11-08T02:19:15.997Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:14.113 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84452 00:15:14.113 02:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84452 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 84018 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84018 ']' 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84018 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84018 00:15:14.371 killing process with pid 84018 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84018' 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84018 00:15:14.371 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84018 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 
-- # prefix=NVMeTLSkey-1 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.hiYTAxHGEA 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.hiYTAxHGEA 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84488 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84488 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84488 ']' 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.630 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.630 [2024-11-08 02:19:16.430648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:14.630 [2024-11-08 02:19:16.431009] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.888 [2024-11-08 02:19:16.569074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.888 [2024-11-08 02:19:16.610289] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.888 [2024-11-08 02:19:16.610348] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:14.888 [2024-11-08 02:19:16.610360] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.888 [2024-11-08 02:19:16.610368] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.888 [2024-11-08 02:19:16.610375] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.888 [2024-11-08 02:19:16.610404] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.888 [2024-11-08 02:19:16.640503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:14.888 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.888 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:14.888 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:14.888 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:14.888 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.888 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.888 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.hiYTAxHGEA 00:15:14.888 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hiYTAxHGEA 00:15:14.888 02:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:15.452 [2024-11-08 02:19:17.077757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.452 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:15.711 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:15.969 [2024-11-08 02:19:17.713927] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:15.969 [2024-11-08 02:19:17.714172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:15.969 02:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:16.226 malloc0 00:15:16.226 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:16.484 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA 00:15:16.742 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hiYTAxHGEA 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
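Between the key-format helpers above and the bdevperf trace that continues below, setup_nvmf_tgt rebuilds the target side for the long-key run: the freshly generated interchange key (NVMeTLSkey-1:02:...) is written to /tmp/tmp.hiYTAxHGEA and chmod'ed to 0600 before registration, the TCP listener is created with -k, and host1 is bound to the key by name. Condensed from the RPCs in the trace (repository prefix dropped; these go to the target's default /var/tmp/spdk.sock rather than the bdevperf socket):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k is the flag tls.sh uses to make this listener require TLS
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0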
00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hiYTAxHGEA 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84542 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84542 /var/tmp/bdevperf.sock 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84542 ']' 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.000 02:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:17.000 [2024-11-08 02:19:18.805411] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:17.000 [2024-11-08 02:19:18.805694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84542 ] 00:15:17.259 [2024-11-08 02:19:18.945376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.259 [2024-11-08 02:19:18.986017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.259 [2024-11-08 02:19:19.019015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.259 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.259 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:17.259 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA 00:15:17.517 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:17.778 [2024-11-08 02:19:19.513000] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.778 TLSTESTn1 00:15:17.778 02:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:18.038 Running I/O for 10 seconds... 00:15:19.906 4352.00 IOPS, 17.00 MiB/s [2024-11-08T02:19:22.725Z] 4400.00 IOPS, 17.19 MiB/s [2024-11-08T02:19:24.101Z] 4391.00 IOPS, 17.15 MiB/s [2024-11-08T02:19:25.035Z] 4388.75 IOPS, 17.14 MiB/s [2024-11-08T02:19:25.969Z] 4387.80 IOPS, 17.14 MiB/s [2024-11-08T02:19:26.905Z] 4386.33 IOPS, 17.13 MiB/s [2024-11-08T02:19:27.841Z] 4387.29 IOPS, 17.14 MiB/s [2024-11-08T02:19:28.775Z] 4388.62 IOPS, 17.14 MiB/s [2024-11-08T02:19:30.150Z] 4366.89 IOPS, 17.06 MiB/s [2024-11-08T02:19:30.150Z] 4336.30 IOPS, 16.94 MiB/s 00:15:28.266 Latency(us) 00:15:28.266 [2024-11-08T02:19:30.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.266 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:28.266 Verification LBA range: start 0x0 length 0x2000 00:15:28.266 TLSTESTn1 : 10.02 4341.57 16.96 0.00 0.00 29428.22 6136.55 23592.96 00:15:28.266 [2024-11-08T02:19:30.150Z] =================================================================================================================== 00:15:28.266 [2024-11-08T02:19:30.150Z] Total : 4341.57 16.96 0.00 0.00 29428.22 6136.55 23592.96 00:15:28.266 { 00:15:28.266 "results": [ 00:15:28.266 { 00:15:28.266 "job": "TLSTESTn1", 00:15:28.266 "core_mask": "0x4", 00:15:28.266 "workload": "verify", 00:15:28.266 "status": "finished", 00:15:28.266 "verify_range": { 00:15:28.266 "start": 0, 00:15:28.266 "length": 8192 00:15:28.266 }, 00:15:28.266 "queue_depth": 128, 00:15:28.266 "io_size": 4096, 00:15:28.266 "runtime": 10.017104, 00:15:28.266 "iops": 4341.574171537003, 00:15:28.266 "mibps": 16.95927410756642, 00:15:28.266 "io_failed": 0, 00:15:28.266 "io_timeout": 0, 00:15:28.266 "avg_latency_us": 29428.21788883547, 00:15:28.266 "min_latency_us": 6136.552727272728, 00:15:28.266 
"max_latency_us": 23592.96 00:15:28.266 } 00:15:28.266 ], 00:15:28.266 "core_count": 1 00:15:28.266 } 00:15:28.266 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:28.266 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84542 00:15:28.266 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84542 ']' 00:15:28.266 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84542 00:15:28.266 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:28.266 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.266 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84542 00:15:28.266 killing process with pid 84542 00:15:28.267 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.267 00:15:28.267 Latency(us) 00:15:28.267 [2024-11-08T02:19:30.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.267 [2024-11-08T02:19:30.151Z] =================================================================================================================== 00:15:28.267 [2024-11-08T02:19:30.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84542' 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84542 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84542 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.hiYTAxHGEA 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hiYTAxHGEA 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hiYTAxHGEA 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hiYTAxHGEA 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hiYTAxHGEA 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84671 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84671 /var/tmp/bdevperf.sock 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84671 ']' 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:28.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.267 02:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.267 [2024-11-08 02:19:29.983543] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:28.267 [2024-11-08 02:19:29.983642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84671 ] 00:15:28.267 [2024-11-08 02:19:30.126777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.525 [2024-11-08 02:19:30.168366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.525 [2024-11-08 02:19:30.201581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.525 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.525 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:28.525 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA 00:15:28.784 [2024-11-08 02:19:30.547185] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.hiYTAxHGEA': 0100666 00:15:28.784 [2024-11-08 02:19:30.547507] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:28.784 request: 00:15:28.784 { 00:15:28.784 "name": "key0", 00:15:28.784 "path": "/tmp/tmp.hiYTAxHGEA", 00:15:28.784 "method": "keyring_file_add_key", 00:15:28.784 "req_id": 1 00:15:28.784 } 00:15:28.784 Got JSON-RPC error response 00:15:28.784 response: 00:15:28.784 { 00:15:28.784 "code": -1, 00:15:28.784 "message": "Operation not permitted" 00:15:28.784 } 00:15:28.784 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:29.043 [2024-11-08 02:19:30.815340] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:29.043 [2024-11-08 02:19:30.815430] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:29.043 request: 00:15:29.043 { 00:15:29.043 "name": "TLSTEST", 00:15:29.043 "trtype": "tcp", 00:15:29.043 "traddr": "10.0.0.3", 00:15:29.043 "adrfam": "ipv4", 00:15:29.043 "trsvcid": "4420", 00:15:29.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.043 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:29.043 "prchk_reftag": false, 00:15:29.043 "prchk_guard": false, 00:15:29.043 "hdgst": false, 00:15:29.043 "ddgst": false, 00:15:29.043 "psk": "key0", 00:15:29.043 "allow_unrecognized_csi": false, 00:15:29.043 "method": "bdev_nvme_attach_controller", 00:15:29.043 "req_id": 1 00:15:29.043 } 00:15:29.043 Got JSON-RPC error response 00:15:29.043 response: 00:15:29.043 { 00:15:29.043 "code": -126, 00:15:29.043 "message": "Required key not available" 00:15:29.043 } 00:15:29.043 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84671 00:15:29.043 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84671 ']' 00:15:29.043 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84671 00:15:29.043 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:29.043 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.043 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84671 00:15:29.043 killing process with pid 84671 00:15:29.043 Received shutdown signal, test time was about 10.000000 seconds 00:15:29.043 00:15:29.043 Latency(us) 00:15:29.043 [2024-11-08T02:19:30.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.043 [2024-11-08T02:19:30.927Z] =================================================================================================================== 00:15:29.043 [2024-11-08T02:19:30.927Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:29.043 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:29.043 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:29.043 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84671' 00:15:29.043 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84671 00:15:29.043 02:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84671 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 84488 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84488 ']' 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84488 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84488 00:15:29.302 killing process with pid 84488 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84488' 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84488 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84488 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.302 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:29.584 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84697 00:15:29.584 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:29.584 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84697 00:15:29.584 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84697 ']' 00:15:29.584 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.584 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.584 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.584 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.584 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.584 [2024-11-08 02:19:31.243037] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:29.584 [2024-11-08 02:19:31.243546] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.584 [2024-11-08 02:19:31.379553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.584 [2024-11-08 02:19:31.419202] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.584 [2024-11-08 02:19:31.419264] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.584 [2024-11-08 02:19:31.419279] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.584 [2024-11-08 02:19:31.419289] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.584 [2024-11-08 02:19:31.419298] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
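Not from the captured log: the target side uses the same start-and-wait pattern, but inside the test's network namespace. The command below is the one traced above; the flag glosses summarize the notices printed around it and are illustrative.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# -i 0      shared-memory instance id, matching the trace file /dev/shm/nvmf_trace.0
# -e 0xFFFF enable every tracepoint group (the "Tracepoint Group Mask 0xFFFF" notice)
# -m 0x2    core mask: a single reactor, which the log reports starting on core 1
# Runtime events can then be captured with:  spdk_trace -s nvmf -i 0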
00:15:29.584 [2024-11-08 02:19:31.419334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.849 [2024-11-08 02:19:31.452045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.hiYTAxHGEA 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.hiYTAxHGEA 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.hiYTAxHGEA 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hiYTAxHGEA 00:15:29.849 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:30.108 [2024-11-08 02:19:31.785984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:30.108 02:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:30.367 02:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:30.626 [2024-11-08 02:19:32.346127] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:30.626 [2024-11-08 02:19:32.346523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:30.626 02:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:30.885 malloc0 00:15:30.885 02:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:31.143 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA 00:15:31.710 
[2024-11-08 02:19:33.295016] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.hiYTAxHGEA': 0100666 00:15:31.710 [2024-11-08 02:19:33.295064] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:31.710 request: 00:15:31.710 { 00:15:31.710 "name": "key0", 00:15:31.710 "path": "/tmp/tmp.hiYTAxHGEA", 00:15:31.710 "method": "keyring_file_add_key", 00:15:31.710 "req_id": 1 00:15:31.710 } 00:15:31.710 Got JSON-RPC error response 00:15:31.710 response: 00:15:31.710 { 00:15:31.710 "code": -1, 00:15:31.710 "message": "Operation not permitted" 00:15:31.710 } 00:15:31.710 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:31.969 [2024-11-08 02:19:33.603134] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:31.969 [2024-11-08 02:19:33.603217] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:31.969 request: 00:15:31.969 { 00:15:31.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.969 "host": "nqn.2016-06.io.spdk:host1", 00:15:31.969 "psk": "key0", 00:15:31.969 "method": "nvmf_subsystem_add_host", 00:15:31.969 "req_id": 1 00:15:31.969 } 00:15:31.969 Got JSON-RPC error response 00:15:31.969 response: 00:15:31.969 { 00:15:31.969 "code": -32603, 00:15:31.969 "message": "Internal error" 00:15:31.969 } 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 84697 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84697 ']' 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84697 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84697 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:31.969 killing process with pid 84697 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84697' 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84697 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84697 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.hiYTAxHGEA 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84753 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84753 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84753 ']' 00:15:31.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:31.969 02:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.227 [2024-11-08 02:19:33.878802] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:32.227 [2024-11-08 02:19:33.879143] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.227 [2024-11-08 02:19:34.016600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.227 [2024-11-08 02:19:34.050957] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.227 [2024-11-08 02:19:34.051012] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.227 [2024-11-08 02:19:34.051024] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.228 [2024-11-08 02:19:34.051032] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.228 [2024-11-08 02:19:34.051040] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
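The keyring_file_add_key failures above ("Invalid permissions for key file ... 0100666" followed by "Operation not permitted") are the expected negative cases: the keyring refuses PSK files that are readable by group or other. The chmod 0600 at target/tls.sh@182 is what lets the next setup pass succeed. A condensed sketch, with the key contents elided as they are in the log:
key=/tmp/tmp.hiYTAxHGEA                # path taken from the trace; contents not shown there
chmod 0600 "$key"                      # owner read/write only, which passes the keyring check
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 "$key"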
00:15:32.228 [2024-11-08 02:19:34.051067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.228 [2024-11-08 02:19:34.079577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:32.486 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.486 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:32.486 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:32.486 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:32.486 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.486 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.486 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.hiYTAxHGEA 00:15:32.486 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hiYTAxHGEA 00:15:32.486 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:32.745 [2024-11-08 02:19:34.463291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.745 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:33.003 02:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:33.262 [2024-11-08 02:19:35.091794] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:33.262 [2024-11-08 02:19:35.092011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:33.262 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:33.520 malloc0 00:15:33.521 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:34.087 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA 00:15:34.345 02:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:34.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
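Laid out in order, the setup_nvmf_tgt sequence the trace just walked through is the following; every command and argument is taken from the log, and RPC is only a stand-in for the rpc.py path used above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o                                   # "TCP Transport Init" notice
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener ("secure_channel" in the saved config)
$RPC bdev_malloc_create 32 4096 -b malloc0                             # 32 MiB bdev, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA                     # succeeds now that the file is 0600
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0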
00:15:34.604 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84807 00:15:34.605 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:34.605 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:34.605 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84807 /var/tmp/bdevperf.sock 00:15:34.605 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84807 ']' 00:15:34.605 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:34.605 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.605 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:34.605 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.605 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.605 [2024-11-08 02:19:36.340215] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:34.605 [2024-11-08 02:19:36.340500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84807 ] 00:15:34.605 [2024-11-08 02:19:36.477574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.863 [2024-11-08 02:19:36.518419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.863 [2024-11-08 02:19:36.550691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.863 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.863 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:34.863 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA 00:15:35.122 02:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:35.381 [2024-11-08 02:19:37.140263] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:35.381 TLSTESTn1 00:15:35.381 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:35.949 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:35.949 "subsystems": [ 00:15:35.949 { 00:15:35.949 "subsystem": "keyring", 00:15:35.949 "config": [ 00:15:35.949 { 00:15:35.949 "method": "keyring_file_add_key", 00:15:35.949 "params": { 00:15:35.949 "name": "key0", 00:15:35.949 "path": "/tmp/tmp.hiYTAxHGEA" 00:15:35.949 } 00:15:35.949 } 00:15:35.949 ] 00:15:35.949 }, 
00:15:35.949 { 00:15:35.949 "subsystem": "iobuf", 00:15:35.949 "config": [ 00:15:35.949 { 00:15:35.949 "method": "iobuf_set_options", 00:15:35.949 "params": { 00:15:35.949 "small_pool_count": 8192, 00:15:35.949 "large_pool_count": 1024, 00:15:35.949 "small_bufsize": 8192, 00:15:35.949 "large_bufsize": 135168 00:15:35.949 } 00:15:35.949 } 00:15:35.949 ] 00:15:35.949 }, 00:15:35.949 { 00:15:35.949 "subsystem": "sock", 00:15:35.949 "config": [ 00:15:35.949 { 00:15:35.949 "method": "sock_set_default_impl", 00:15:35.949 "params": { 00:15:35.949 "impl_name": "uring" 00:15:35.949 } 00:15:35.949 }, 00:15:35.949 { 00:15:35.949 "method": "sock_impl_set_options", 00:15:35.949 "params": { 00:15:35.949 "impl_name": "ssl", 00:15:35.949 "recv_buf_size": 4096, 00:15:35.949 "send_buf_size": 4096, 00:15:35.949 "enable_recv_pipe": true, 00:15:35.949 "enable_quickack": false, 00:15:35.949 "enable_placement_id": 0, 00:15:35.949 "enable_zerocopy_send_server": true, 00:15:35.949 "enable_zerocopy_send_client": false, 00:15:35.949 "zerocopy_threshold": 0, 00:15:35.949 "tls_version": 0, 00:15:35.949 "enable_ktls": false 00:15:35.949 } 00:15:35.949 }, 00:15:35.949 { 00:15:35.949 "method": "sock_impl_set_options", 00:15:35.949 "params": { 00:15:35.949 "impl_name": "posix", 00:15:35.949 "recv_buf_size": 2097152, 00:15:35.949 "send_buf_size": 2097152, 00:15:35.949 "enable_recv_pipe": true, 00:15:35.949 "enable_quickack": false, 00:15:35.949 "enable_placement_id": 0, 00:15:35.949 "enable_zerocopy_send_server": true, 00:15:35.949 "enable_zerocopy_send_client": false, 00:15:35.949 "zerocopy_threshold": 0, 00:15:35.949 "tls_version": 0, 00:15:35.949 "enable_ktls": false 00:15:35.949 } 00:15:35.949 }, 00:15:35.949 { 00:15:35.949 "method": "sock_impl_set_options", 00:15:35.949 "params": { 00:15:35.949 "impl_name": "uring", 00:15:35.949 "recv_buf_size": 2097152, 00:15:35.949 "send_buf_size": 2097152, 00:15:35.949 "enable_recv_pipe": true, 00:15:35.949 "enable_quickack": false, 00:15:35.949 "enable_placement_id": 0, 00:15:35.949 "enable_zerocopy_send_server": false, 00:15:35.949 "enable_zerocopy_send_client": false, 00:15:35.949 "zerocopy_threshold": 0, 00:15:35.949 "tls_version": 0, 00:15:35.949 "enable_ktls": false 00:15:35.949 } 00:15:35.949 } 00:15:35.949 ] 00:15:35.949 }, 00:15:35.949 { 00:15:35.949 "subsystem": "vmd", 00:15:35.949 "config": [] 00:15:35.949 }, 00:15:35.949 { 00:15:35.949 "subsystem": "accel", 00:15:35.949 "config": [ 00:15:35.949 { 00:15:35.949 "method": "accel_set_options", 00:15:35.949 "params": { 00:15:35.949 "small_cache_size": 128, 00:15:35.949 "large_cache_size": 16, 00:15:35.949 "task_count": 2048, 00:15:35.949 "sequence_count": 2048, 00:15:35.949 "buf_count": 2048 00:15:35.949 } 00:15:35.949 } 00:15:35.949 ] 00:15:35.949 }, 00:15:35.949 { 00:15:35.949 "subsystem": "bdev", 00:15:35.949 "config": [ 00:15:35.949 { 00:15:35.949 "method": "bdev_set_options", 00:15:35.949 "params": { 00:15:35.949 "bdev_io_pool_size": 65535, 00:15:35.949 "bdev_io_cache_size": 256, 00:15:35.949 "bdev_auto_examine": true, 00:15:35.949 "iobuf_small_cache_size": 128, 00:15:35.949 "iobuf_large_cache_size": 16 00:15:35.949 } 00:15:35.949 }, 00:15:35.949 { 00:15:35.949 "method": "bdev_raid_set_options", 00:15:35.949 "params": { 00:15:35.949 "process_window_size_kb": 1024, 00:15:35.949 "process_max_bandwidth_mb_sec": 0 00:15:35.949 } 00:15:35.949 }, 00:15:35.949 { 00:15:35.949 "method": "bdev_iscsi_set_options", 00:15:35.949 "params": { 00:15:35.949 "timeout_sec": 30 00:15:35.949 } 00:15:35.949 }, 00:15:35.949 { 00:15:35.949 
"method": "bdev_nvme_set_options", 00:15:35.949 "params": { 00:15:35.949 "action_on_timeout": "none", 00:15:35.949 "timeout_us": 0, 00:15:35.949 "timeout_admin_us": 0, 00:15:35.949 "keep_alive_timeout_ms": 10000, 00:15:35.949 "arbitration_burst": 0, 00:15:35.949 "low_priority_weight": 0, 00:15:35.949 "medium_priority_weight": 0, 00:15:35.949 "high_priority_weight": 0, 00:15:35.949 "nvme_adminq_poll_period_us": 10000, 00:15:35.949 "nvme_ioq_poll_period_us": 0, 00:15:35.949 "io_queue_requests": 0, 00:15:35.949 "delay_cmd_submit": true, 00:15:35.949 "transport_retry_count": 4, 00:15:35.949 "bdev_retry_count": 3, 00:15:35.949 "transport_ack_timeout": 0, 00:15:35.949 "ctrlr_loss_timeout_sec": 0, 00:15:35.949 "reconnect_delay_sec": 0, 00:15:35.949 "fast_io_fail_timeout_sec": 0, 00:15:35.949 "disable_auto_failback": false, 00:15:35.949 "generate_uuids": false, 00:15:35.949 "transport_tos": 0, 00:15:35.949 "nvme_error_stat": false, 00:15:35.949 "rdma_srq_size": 0, 00:15:35.949 "io_path_stat": false, 00:15:35.949 "allow_accel_sequence": false, 00:15:35.949 "rdma_max_cq_size": 0, 00:15:35.949 "rdma_cm_event_timeout_ms": 0, 00:15:35.949 "dhchap_digests": [ 00:15:35.950 "sha256", 00:15:35.950 "sha384", 00:15:35.950 "sha512" 00:15:35.950 ], 00:15:35.950 "dhchap_dhgroups": [ 00:15:35.950 "null", 00:15:35.950 "ffdhe2048", 00:15:35.950 "ffdhe3072", 00:15:35.950 "ffdhe4096", 00:15:35.950 "ffdhe6144", 00:15:35.950 "ffdhe8192" 00:15:35.950 ] 00:15:35.950 } 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "method": "bdev_nvme_set_hotplug", 00:15:35.950 "params": { 00:15:35.950 "period_us": 100000, 00:15:35.950 "enable": false 00:15:35.950 } 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "method": "bdev_malloc_create", 00:15:35.950 "params": { 00:15:35.950 "name": "malloc0", 00:15:35.950 "num_blocks": 8192, 00:15:35.950 "block_size": 4096, 00:15:35.950 "physical_block_size": 4096, 00:15:35.950 "uuid": "4b15d99e-5289-4ee5-a408-baafd14ce174", 00:15:35.950 "optimal_io_boundary": 0, 00:15:35.950 "md_size": 0, 00:15:35.950 "dif_type": 0, 00:15:35.950 "dif_is_head_of_md": false, 00:15:35.950 "dif_pi_format": 0 00:15:35.950 } 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "method": "bdev_wait_for_examine" 00:15:35.950 } 00:15:35.950 ] 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "subsystem": "nbd", 00:15:35.950 "config": [] 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "subsystem": "scheduler", 00:15:35.950 "config": [ 00:15:35.950 { 00:15:35.950 "method": "framework_set_scheduler", 00:15:35.950 "params": { 00:15:35.950 "name": "static" 00:15:35.950 } 00:15:35.950 } 00:15:35.950 ] 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "subsystem": "nvmf", 00:15:35.950 "config": [ 00:15:35.950 { 00:15:35.950 "method": "nvmf_set_config", 00:15:35.950 "params": { 00:15:35.950 "discovery_filter": "match_any", 00:15:35.950 "admin_cmd_passthru": { 00:15:35.950 "identify_ctrlr": false 00:15:35.950 }, 00:15:35.950 "dhchap_digests": [ 00:15:35.950 "sha256", 00:15:35.950 "sha384", 00:15:35.950 "sha512" 00:15:35.950 ], 00:15:35.950 "dhchap_dhgroups": [ 00:15:35.950 "null", 00:15:35.950 "ffdhe2048", 00:15:35.950 "ffdhe3072", 00:15:35.950 "ffdhe4096", 00:15:35.950 "ffdhe6144", 00:15:35.950 "ffdhe8192" 00:15:35.950 ] 00:15:35.950 } 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "method": "nvmf_set_max_subsystems", 00:15:35.950 "params": { 00:15:35.950 "max_subsystems": 1024 00:15:35.950 } 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "method": "nvmf_set_crdt", 00:15:35.950 "params": { 00:15:35.950 "crdt1": 0, 00:15:35.950 "crdt2": 0, 00:15:35.950 "crdt3": 0 
00:15:35.950 } 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "method": "nvmf_create_transport", 00:15:35.950 "params": { 00:15:35.950 "trtype": "TCP", 00:15:35.950 "max_queue_depth": 128, 00:15:35.950 "max_io_qpairs_per_ctrlr": 127, 00:15:35.950 "in_capsule_data_size": 4096, 00:15:35.950 "max_io_size": 131072, 00:15:35.950 "io_unit_size": 131072, 00:15:35.950 "max_aq_depth": 128, 00:15:35.950 "num_shared_buffers": 511, 00:15:35.950 "buf_cache_size": 4294967295, 00:15:35.950 "dif_insert_or_strip": false, 00:15:35.950 "zcopy": false, 00:15:35.950 "c2h_success": false, 00:15:35.950 "sock_priority": 0, 00:15:35.950 "abort_timeout_sec": 1, 00:15:35.950 "ack_timeout": 0, 00:15:35.950 "data_wr_pool_size": 0 00:15:35.950 } 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "method": "nvmf_create_subsystem", 00:15:35.950 "params": { 00:15:35.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.950 "allow_any_host": false, 00:15:35.950 "serial_number": "SPDK00000000000001", 00:15:35.950 "model_number": "SPDK bdev Controller", 00:15:35.950 "max_namespaces": 10, 00:15:35.950 "min_cntlid": 1, 00:15:35.950 "max_cntlid": 65519, 00:15:35.950 "ana_reporting": false 00:15:35.950 } 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "method": "nvmf_subsystem_add_host", 00:15:35.950 "params": { 00:15:35.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.950 "host": "nqn.2016-06.io.spdk:host1", 00:15:35.950 "psk": "key0" 00:15:35.950 } 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "method": "nvmf_subsystem_add_ns", 00:15:35.950 "params": { 00:15:35.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.950 "namespace": { 00:15:35.950 "nsid": 1, 00:15:35.950 "bdev_name": "malloc0", 00:15:35.950 "nguid": "4B15D99E52894EE5A408BAAFD14CE174", 00:15:35.950 "uuid": "4b15d99e-5289-4ee5-a408-baafd14ce174", 00:15:35.950 "no_auto_visible": false 00:15:35.950 } 00:15:35.950 } 00:15:35.950 }, 00:15:35.950 { 00:15:35.950 "method": "nvmf_subsystem_add_listener", 00:15:35.950 "params": { 00:15:35.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.950 "listen_address": { 00:15:35.950 "trtype": "TCP", 00:15:35.950 "adrfam": "IPv4", 00:15:35.950 "traddr": "10.0.0.3", 00:15:35.950 "trsvcid": "4420" 00:15:35.950 }, 00:15:35.950 "secure_channel": true 00:15:35.950 } 00:15:35.950 } 00:15:35.950 ] 00:15:35.950 } 00:15:35.950 ] 00:15:35.950 }' 00:15:35.950 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:36.210 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:36.210 "subsystems": [ 00:15:36.210 { 00:15:36.210 "subsystem": "keyring", 00:15:36.210 "config": [ 00:15:36.210 { 00:15:36.210 "method": "keyring_file_add_key", 00:15:36.210 "params": { 00:15:36.210 "name": "key0", 00:15:36.210 "path": "/tmp/tmp.hiYTAxHGEA" 00:15:36.210 } 00:15:36.210 } 00:15:36.210 ] 00:15:36.210 }, 00:15:36.210 { 00:15:36.210 "subsystem": "iobuf", 00:15:36.210 "config": [ 00:15:36.210 { 00:15:36.210 "method": "iobuf_set_options", 00:15:36.210 "params": { 00:15:36.210 "small_pool_count": 8192, 00:15:36.210 "large_pool_count": 1024, 00:15:36.210 "small_bufsize": 8192, 00:15:36.210 "large_bufsize": 135168 00:15:36.210 } 00:15:36.210 } 00:15:36.210 ] 00:15:36.210 }, 00:15:36.210 { 00:15:36.210 "subsystem": "sock", 00:15:36.210 "config": [ 00:15:36.210 { 00:15:36.210 "method": "sock_set_default_impl", 00:15:36.210 "params": { 00:15:36.210 "impl_name": "uring" 00:15:36.210 } 00:15:36.210 }, 00:15:36.210 { 00:15:36.210 "method": 
"sock_impl_set_options", 00:15:36.210 "params": { 00:15:36.210 "impl_name": "ssl", 00:15:36.210 "recv_buf_size": 4096, 00:15:36.210 "send_buf_size": 4096, 00:15:36.210 "enable_recv_pipe": true, 00:15:36.210 "enable_quickack": false, 00:15:36.210 "enable_placement_id": 0, 00:15:36.210 "enable_zerocopy_send_server": true, 00:15:36.210 "enable_zerocopy_send_client": false, 00:15:36.210 "zerocopy_threshold": 0, 00:15:36.210 "tls_version": 0, 00:15:36.210 "enable_ktls": false 00:15:36.210 } 00:15:36.210 }, 00:15:36.210 { 00:15:36.210 "method": "sock_impl_set_options", 00:15:36.210 "params": { 00:15:36.210 "impl_name": "posix", 00:15:36.210 "recv_buf_size": 2097152, 00:15:36.210 "send_buf_size": 2097152, 00:15:36.210 "enable_recv_pipe": true, 00:15:36.210 "enable_quickack": false, 00:15:36.210 "enable_placement_id": 0, 00:15:36.210 "enable_zerocopy_send_server": true, 00:15:36.210 "enable_zerocopy_send_client": false, 00:15:36.210 "zerocopy_threshold": 0, 00:15:36.210 "tls_version": 0, 00:15:36.210 "enable_ktls": false 00:15:36.210 } 00:15:36.210 }, 00:15:36.210 { 00:15:36.210 "method": "sock_impl_set_options", 00:15:36.210 "params": { 00:15:36.210 "impl_name": "uring", 00:15:36.210 "recv_buf_size": 2097152, 00:15:36.210 "send_buf_size": 2097152, 00:15:36.210 "enable_recv_pipe": true, 00:15:36.210 "enable_quickack": false, 00:15:36.210 "enable_placement_id": 0, 00:15:36.210 "enable_zerocopy_send_server": false, 00:15:36.210 "enable_zerocopy_send_client": false, 00:15:36.210 "zerocopy_threshold": 0, 00:15:36.210 "tls_version": 0, 00:15:36.210 "enable_ktls": false 00:15:36.210 } 00:15:36.210 } 00:15:36.210 ] 00:15:36.210 }, 00:15:36.210 { 00:15:36.210 "subsystem": "vmd", 00:15:36.210 "config": [] 00:15:36.210 }, 00:15:36.210 { 00:15:36.210 "subsystem": "accel", 00:15:36.210 "config": [ 00:15:36.210 { 00:15:36.210 "method": "accel_set_options", 00:15:36.210 "params": { 00:15:36.210 "small_cache_size": 128, 00:15:36.210 "large_cache_size": 16, 00:15:36.210 "task_count": 2048, 00:15:36.210 "sequence_count": 2048, 00:15:36.210 "buf_count": 2048 00:15:36.210 } 00:15:36.210 } 00:15:36.210 ] 00:15:36.210 }, 00:15:36.210 { 00:15:36.210 "subsystem": "bdev", 00:15:36.210 "config": [ 00:15:36.210 { 00:15:36.210 "method": "bdev_set_options", 00:15:36.210 "params": { 00:15:36.210 "bdev_io_pool_size": 65535, 00:15:36.210 "bdev_io_cache_size": 256, 00:15:36.210 "bdev_auto_examine": true, 00:15:36.210 "iobuf_small_cache_size": 128, 00:15:36.210 "iobuf_large_cache_size": 16 00:15:36.210 } 00:15:36.210 }, 00:15:36.210 { 00:15:36.210 "method": "bdev_raid_set_options", 00:15:36.210 "params": { 00:15:36.210 "process_window_size_kb": 1024, 00:15:36.210 "process_max_bandwidth_mb_sec": 0 00:15:36.210 } 00:15:36.210 }, 00:15:36.210 { 00:15:36.210 "method": "bdev_iscsi_set_options", 00:15:36.210 "params": { 00:15:36.210 "timeout_sec": 30 00:15:36.210 } 00:15:36.210 }, 00:15:36.210 { 00:15:36.210 "method": "bdev_nvme_set_options", 00:15:36.210 "params": { 00:15:36.210 "action_on_timeout": "none", 00:15:36.210 "timeout_us": 0, 00:15:36.210 "timeout_admin_us": 0, 00:15:36.210 "keep_alive_timeout_ms": 10000, 00:15:36.210 "arbitration_burst": 0, 00:15:36.210 "low_priority_weight": 0, 00:15:36.210 "medium_priority_weight": 0, 00:15:36.210 "high_priority_weight": 0, 00:15:36.210 "nvme_adminq_poll_period_us": 10000, 00:15:36.211 "nvme_ioq_poll_period_us": 0, 00:15:36.211 "io_queue_requests": 512, 00:15:36.211 "delay_cmd_submit": true, 00:15:36.211 "transport_retry_count": 4, 00:15:36.211 "bdev_retry_count": 3, 00:15:36.211 
"transport_ack_timeout": 0, 00:15:36.211 "ctrlr_loss_timeout_sec": 0, 00:15:36.211 "reconnect_delay_sec": 0, 00:15:36.211 "fast_io_fail_timeout_sec": 0, 00:15:36.211 "disable_auto_failback": false, 00:15:36.211 "generate_uuids": false, 00:15:36.211 "transport_tos": 0, 00:15:36.211 "nvme_error_stat": false, 00:15:36.211 "rdma_srq_size": 0, 00:15:36.211 "io_path_stat": false, 00:15:36.211 "allow_accel_sequence": false, 00:15:36.211 "rdma_max_cq_size": 0, 00:15:36.211 "rdma_cm_event_timeout_ms": 0, 00:15:36.211 "dhchap_digests": [ 00:15:36.211 "sha256", 00:15:36.211 "sha384", 00:15:36.211 "sha512" 00:15:36.211 ], 00:15:36.211 "dhchap_dhgroups": [ 00:15:36.211 "null", 00:15:36.211 "ffdhe2048", 00:15:36.211 "ffdhe3072", 00:15:36.211 "ffdhe4096", 00:15:36.211 "ffdhe6144", 00:15:36.211 "ffdhe8192" 00:15:36.211 ] 00:15:36.211 } 00:15:36.211 }, 00:15:36.211 { 00:15:36.211 "method": "bdev_nvme_attach_controller", 00:15:36.211 "params": { 00:15:36.211 "name": "TLSTEST", 00:15:36.211 "trtype": "TCP", 00:15:36.211 "adrfam": "IPv4", 00:15:36.211 "traddr": "10.0.0.3", 00:15:36.211 "trsvcid": "4420", 00:15:36.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.211 "prchk_reftag": false, 00:15:36.211 "prchk_guard": false, 00:15:36.211 "ctrlr_loss_timeout_sec": 0, 00:15:36.211 "reconnect_delay_sec": 0, 00:15:36.211 "fast_io_fail_timeout_sec": 0, 00:15:36.211 "psk": "key0", 00:15:36.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.211 "hdgst": false, 00:15:36.211 "ddgst": false 00:15:36.211 } 00:15:36.211 }, 00:15:36.211 { 00:15:36.211 "method": "bdev_nvme_set_hotplug", 00:15:36.211 "params": { 00:15:36.211 "period_us": 100000, 00:15:36.211 "enable": false 00:15:36.211 } 00:15:36.211 }, 00:15:36.211 { 00:15:36.211 "method": "bdev_wait_for_examine" 00:15:36.211 } 00:15:36.211 ] 00:15:36.211 }, 00:15:36.211 { 00:15:36.211 "subsystem": "nbd", 00:15:36.211 "config": [] 00:15:36.211 } 00:15:36.211 ] 00:15:36.211 }' 00:15:36.211 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84807 00:15:36.211 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84807 ']' 00:15:36.211 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84807 00:15:36.211 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:36.211 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:36.211 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84807 00:15:36.211 killing process with pid 84807 00:15:36.211 Received shutdown signal, test time was about 10.000000 seconds 00:15:36.211 00:15:36.211 Latency(us) 00:15:36.211 [2024-11-08T02:19:38.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.211 [2024-11-08T02:19:38.095Z] =================================================================================================================== 00:15:36.211 [2024-11-08T02:19:38.095Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:36.211 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:36.211 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:36.211 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84807' 00:15:36.211 02:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84807 00:15:36.211 02:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84807 00:15:36.211 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84753 00:15:36.211 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84753 ']' 00:15:36.211 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84753 00:15:36.211 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:36.211 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:36.211 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84753 00:15:36.470 killing process with pid 84753 00:15:36.470 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:36.471 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:36.471 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84753' 00:15:36.471 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84753 00:15:36.471 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84753 00:15:36.471 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:36.471 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:36.471 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:36.471 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.471 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:36.471 "subsystems": [ 00:15:36.471 { 00:15:36.471 "subsystem": "keyring", 00:15:36.471 "config": [ 00:15:36.471 { 00:15:36.471 "method": "keyring_file_add_key", 00:15:36.471 "params": { 00:15:36.471 "name": "key0", 00:15:36.471 "path": "/tmp/tmp.hiYTAxHGEA" 00:15:36.471 } 00:15:36.471 } 00:15:36.471 ] 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "subsystem": "iobuf", 00:15:36.471 "config": [ 00:15:36.471 { 00:15:36.471 "method": "iobuf_set_options", 00:15:36.471 "params": { 00:15:36.471 "small_pool_count": 8192, 00:15:36.471 "large_pool_count": 1024, 00:15:36.471 "small_bufsize": 8192, 00:15:36.471 "large_bufsize": 135168 00:15:36.471 } 00:15:36.471 } 00:15:36.471 ] 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "subsystem": "sock", 00:15:36.471 "config": [ 00:15:36.471 { 00:15:36.471 "method": "sock_set_default_impl", 00:15:36.471 "params": { 00:15:36.471 "impl_name": "uring" 00:15:36.471 } 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "method": "sock_impl_set_options", 00:15:36.471 "params": { 00:15:36.471 "impl_name": "ssl", 00:15:36.471 "recv_buf_size": 4096, 00:15:36.471 "send_buf_size": 4096, 00:15:36.471 "enable_recv_pipe": true, 00:15:36.471 "enable_quickack": false, 00:15:36.471 "enable_placement_id": 0, 00:15:36.471 "enable_zerocopy_send_server": true, 00:15:36.471 "enable_zerocopy_send_client": false, 00:15:36.471 "zerocopy_threshold": 0, 00:15:36.471 "tls_version": 0, 00:15:36.471 "enable_ktls": false 00:15:36.471 } 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "method": 
"sock_impl_set_options", 00:15:36.471 "params": { 00:15:36.471 "impl_name": "posix", 00:15:36.471 "recv_buf_size": 2097152, 00:15:36.471 "send_buf_size": 2097152, 00:15:36.471 "enable_recv_pipe": true, 00:15:36.471 "enable_quickack": false, 00:15:36.471 "enable_placement_id": 0, 00:15:36.471 "enable_zerocopy_send_server": true, 00:15:36.471 "enable_zerocopy_send_client": false, 00:15:36.471 "zerocopy_threshold": 0, 00:15:36.471 "tls_version": 0, 00:15:36.471 "enable_ktls": false 00:15:36.471 } 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "method": "sock_impl_set_options", 00:15:36.471 "params": { 00:15:36.471 "impl_name": "uring", 00:15:36.471 "recv_buf_size": 2097152, 00:15:36.471 "send_buf_size": 2097152, 00:15:36.471 "enable_recv_pipe": true, 00:15:36.471 "enable_quickack": false, 00:15:36.471 "enable_placement_id": 0, 00:15:36.471 "enable_zerocopy_send_server": false, 00:15:36.471 "enable_zerocopy_send_client": false, 00:15:36.471 "zerocopy_threshold": 0, 00:15:36.471 "tls_version": 0, 00:15:36.471 "enable_ktls": false 00:15:36.471 } 00:15:36.471 } 00:15:36.471 ] 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "subsystem": "vmd", 00:15:36.471 "config": [] 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "subsystem": "accel", 00:15:36.471 "config": [ 00:15:36.471 { 00:15:36.471 "method": "accel_set_options", 00:15:36.471 "params": { 00:15:36.471 "small_cache_size": 128, 00:15:36.471 "large_cache_size": 16, 00:15:36.471 "task_count": 2048, 00:15:36.471 "sequence_count": 2048, 00:15:36.471 "buf_count": 2048 00:15:36.471 } 00:15:36.471 } 00:15:36.471 ] 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "subsystem": "bdev", 00:15:36.471 "config": [ 00:15:36.471 { 00:15:36.471 "method": "bdev_set_options", 00:15:36.471 "params": { 00:15:36.471 "bdev_io_pool_size": 65535, 00:15:36.471 "bdev_io_cache_size": 256, 00:15:36.471 "bdev_auto_examine": true, 00:15:36.471 "iobuf_small_cache_size": 128, 00:15:36.471 "iobuf_large_cache_size": 16 00:15:36.471 } 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "method": "bdev_raid_set_options", 00:15:36.471 "params": { 00:15:36.471 "process_window_size_kb": 1024, 00:15:36.471 "process_max_bandwidth_mb_sec": 0 00:15:36.471 } 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "method": "bdev_iscsi_set_options", 00:15:36.471 "params": { 00:15:36.471 "timeout_sec": 30 00:15:36.471 } 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "method": "bdev_nvme_set_options", 00:15:36.471 "params": { 00:15:36.471 "action_on_timeout": "none", 00:15:36.471 "timeout_us": 0, 00:15:36.471 "timeout_admin_us": 0, 00:15:36.471 "keep_alive_timeout_ms": 10000, 00:15:36.471 "arbitration_burst": 0, 00:15:36.471 "low_priority_weight": 0, 00:15:36.471 "medium_priority_weight": 0, 00:15:36.471 "high_priority_weight": 0, 00:15:36.471 "nvme_adminq_poll_period_us": 10000, 00:15:36.471 "nvme_ioq_poll_period_us": 0, 00:15:36.471 "io_queue_requests": 0, 00:15:36.471 "delay_cmd_submit": true, 00:15:36.471 "transport_retry_count": 4, 00:15:36.471 "bdev_retry_count": 3, 00:15:36.471 "transport_ack_timeout": 0, 00:15:36.471 "ctrlr_loss_timeout_sec": 0, 00:15:36.471 "reconnect_delay_sec": 0, 00:15:36.471 "fast_io_fail_timeout_sec": 0, 00:15:36.471 "disable_auto_failback": false, 00:15:36.471 "generate_uuids": false, 00:15:36.471 "transport_tos": 0, 00:15:36.471 "nvme_error_stat": false, 00:15:36.471 "rdma_srq_size": 0, 00:15:36.471 "io_path_stat": false, 00:15:36.471 "allow_accel_sequence": false, 00:15:36.471 "rdma_max_cq_size": 0, 00:15:36.471 "rdma_cm_event_timeout_ms": 0, 00:15:36.471 "dhchap_digests": [ 00:15:36.471 
"sha256", 00:15:36.471 "sha384", 00:15:36.471 "sha512" 00:15:36.471 ], 00:15:36.471 "dhchap_dhgroups": [ 00:15:36.471 "null", 00:15:36.471 "ffdhe2048", 00:15:36.471 "ffdhe3072", 00:15:36.471 "ffdhe4096", 00:15:36.471 "ffdhe6144", 00:15:36.471 "ffdhe8192" 00:15:36.471 ] 00:15:36.471 } 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "method": "bdev_nvme_set_hotplug", 00:15:36.471 "params": { 00:15:36.471 "period_us": 100000, 00:15:36.471 "enable": false 00:15:36.471 } 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "method": "bdev_malloc_create", 00:15:36.471 "params": { 00:15:36.471 "name": "malloc0", 00:15:36.471 "num_blocks": 8192, 00:15:36.471 "block_size": 4096, 00:15:36.471 "physical_block_size": 4096, 00:15:36.471 "uuid": "4b15d99e-5289-4ee5-a408-baafd14ce174", 00:15:36.471 "optimal_io_boundary": 0, 00:15:36.471 "md_size": 0, 00:15:36.471 "dif_type": 0, 00:15:36.471 "dif_is_head_of_md": false, 00:15:36.471 "dif_pi_format": 0 00:15:36.471 } 00:15:36.471 }, 00:15:36.471 { 00:15:36.471 "method": "bdev_wait_for_examine" 00:15:36.471 } 00:15:36.471 ] 00:15:36.471 }, 00:15:36.471 { 00:15:36.472 "subsystem": "nbd", 00:15:36.472 "config": [] 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "subsystem": "scheduler", 00:15:36.472 "config": [ 00:15:36.472 { 00:15:36.472 "method": "framework_set_scheduler", 00:15:36.472 "params": { 00:15:36.472 "name": "static" 00:15:36.472 } 00:15:36.472 } 00:15:36.472 ] 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "subsystem": "nvmf", 00:15:36.472 "config": [ 00:15:36.472 { 00:15:36.472 "method": "nvmf_set_config", 00:15:36.472 "params": { 00:15:36.472 "discovery_filter": "match_any", 00:15:36.472 "admin_cmd_passthru": { 00:15:36.472 "identify_ctrlr": false 00:15:36.472 }, 00:15:36.472 "dhchap_digests": [ 00:15:36.472 "sha256", 00:15:36.472 "sha384", 00:15:36.472 "sha512" 00:15:36.472 ], 00:15:36.472 "dhchap_dhgroups": [ 00:15:36.472 "null", 00:15:36.472 "ffdhe2048", 00:15:36.472 "ffdhe3072", 00:15:36.472 "ffdhe4096", 00:15:36.472 "ffdhe6144", 00:15:36.472 "ffdhe8192" 00:15:36.472 ] 00:15:36.472 } 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "method": "nvmf_set_max_subsystems", 00:15:36.472 "params": { 00:15:36.472 "max_subsystems": 1024 00:15:36.472 } 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "method": "nvmf_set_crdt", 00:15:36.472 "params": { 00:15:36.472 "crdt1": 0, 00:15:36.472 "crdt2": 0, 00:15:36.472 "crdt3": 0 00:15:36.472 } 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "method": "nvmf_create_transport", 00:15:36.472 "params": { 00:15:36.472 "trtype": "TCP", 00:15:36.472 "max_queue_depth": 128, 00:15:36.472 "max_io_qpairs_per_ctrlr": 127, 00:15:36.472 "in_capsule_data_size": 4096, 00:15:36.472 "max_io_size": 131072, 00:15:36.472 "io_unit_size": 131072, 00:15:36.472 "max_aq_depth": 128, 00:15:36.472 "num_shared_buffers": 511, 00:15:36.472 "buf_cache_size": 4294967295, 00:15:36.472 "dif_insert_or_strip": false, 00:15:36.472 "zcopy": false, 00:15:36.472 "c2h_success": false, 00:15:36.472 "sock_priority": 0, 00:15:36.472 "abort_timeout_sec": 1, 00:15:36.472 "ack_timeout": 0, 00:15:36.472 "data_wr_pool_size": 0 00:15:36.472 } 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "method": "nvmf_create_subsystem", 00:15:36.472 "params": { 00:15:36.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.472 "allow_any_host": false, 00:15:36.472 "serial_number": "SPDK00000000000001", 00:15:36.472 "model_number": "SPDK bdev Controller", 00:15:36.472 "max_namespaces": 10, 00:15:36.472 "min_cntlid": 1, 00:15:36.472 "max_cntlid": 65519, 00:15:36.472 "ana_reporting": false 00:15:36.472 } 
00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "method": "nvmf_subsystem_add_host", 00:15:36.472 "params": { 00:15:36.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.472 "host": "nqn.2016-06.io.spdk:host1", 00:15:36.472 "psk": "key0" 00:15:36.472 } 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "method": "nvmf_subsystem_add_ns", 00:15:36.472 "params": { 00:15:36.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.472 "namespace": { 00:15:36.472 "nsid": 1, 00:15:36.472 "bdev_name": "malloc0", 00:15:36.472 "nguid": "4B15D99E52894EE5A408BAAFD14CE174", 00:15:36.472 "uuid": "4b15d99e-5289-4ee5-a408-baafd14ce174", 00:15:36.472 "no_auto_visible": false 00:15:36.472 } 00:15:36.472 } 00:15:36.472 }, 00:15:36.472 { 00:15:36.472 "method": "nvmf_subsystem_add_listener", 00:15:36.472 "params": { 00:15:36.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.472 "listen_address": { 00:15:36.472 "trtype": "TCP", 00:15:36.472 "adrfam": "IPv4", 00:15:36.472 "traddr": "10.0.0.3", 00:15:36.472 "trsvcid": "4420" 00:15:36.472 }, 00:15:36.472 "secure_channel": true 00:15:36.472 } 00:15:36.472 } 00:15:36.472 ] 00:15:36.472 } 00:15:36.472 ] 00:15:36.472 }' 00:15:36.472 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84849 00:15:36.472 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:36.472 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84849 00:15:36.472 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84849 ']' 00:15:36.472 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.472 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.472 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.472 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.472 02:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.472 [2024-11-08 02:19:38.324954] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:36.472 [2024-11-08 02:19:38.325050] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.732 [2024-11-08 02:19:38.465556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.732 [2024-11-08 02:19:38.499558] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.732 [2024-11-08 02:19:38.499608] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.732 [2024-11-08 02:19:38.499620] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.732 [2024-11-08 02:19:38.499628] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
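Not from the captured log: target/tls.sh@205 here is replaying the configuration captured earlier with save_config, handing the saved JSON to a fresh target through a file descriptor instead of re-issuing the RPCs. A rough equivalent of the pattern, with tgt.json standing in for the $tgtconf variable echoed above:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt.json     # dump the live configuration as JSON
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c tgt.json &
# The test passes the JSON as -c /dev/fd/62 via an echo of $tgtconf; a regular file behaves the same way.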
00:15:36.732 [2024-11-08 02:19:38.499636] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.732 [2024-11-08 02:19:38.499709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.991 [2024-11-08 02:19:38.642233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.991 [2024-11-08 02:19:38.696575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.991 [2024-11-08 02:19:38.734977] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:36.991 [2024-11-08 02:19:38.735384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:37.560 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.560 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:37.560 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:37.560 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:37.560 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.560 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.560 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84881 00:15:37.560 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84881 /var/tmp/bdevperf.sock 00:15:37.560 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:37.560 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84881 ']' 00:15:37.560 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:37.560 "subsystems": [ 00:15:37.560 { 00:15:37.560 "subsystem": "keyring", 00:15:37.560 "config": [ 00:15:37.560 { 00:15:37.560 "method": "keyring_file_add_key", 00:15:37.560 "params": { 00:15:37.560 "name": "key0", 00:15:37.560 "path": "/tmp/tmp.hiYTAxHGEA" 00:15:37.560 } 00:15:37.560 } 00:15:37.560 ] 00:15:37.560 }, 00:15:37.560 { 00:15:37.560 "subsystem": "iobuf", 00:15:37.560 "config": [ 00:15:37.560 { 00:15:37.560 "method": "iobuf_set_options", 00:15:37.560 "params": { 00:15:37.560 "small_pool_count": 8192, 00:15:37.560 "large_pool_count": 1024, 00:15:37.560 "small_bufsize": 8192, 00:15:37.560 "large_bufsize": 135168 00:15:37.560 } 00:15:37.560 } 00:15:37.560 ] 00:15:37.560 }, 00:15:37.560 { 00:15:37.560 "subsystem": "sock", 00:15:37.560 "config": [ 00:15:37.560 { 00:15:37.560 "method": "sock_set_default_impl", 00:15:37.560 "params": { 00:15:37.560 "impl_name": "uring" 00:15:37.560 } 00:15:37.560 }, 00:15:37.560 { 00:15:37.560 "method": "sock_impl_set_options", 00:15:37.560 "params": { 00:15:37.560 "impl_name": "ssl", 00:15:37.560 "recv_buf_size": 4096, 00:15:37.560 "send_buf_size": 4096, 00:15:37.560 "enable_recv_pipe": true, 00:15:37.560 "enable_quickack": false, 00:15:37.560 "enable_placement_id": 0, 00:15:37.560 "enable_zerocopy_send_server": true, 00:15:37.560 "enable_zerocopy_send_client": false, 00:15:37.560 "zerocopy_threshold": 0, 00:15:37.560 "tls_version": 0, 00:15:37.560 "enable_ktls": false 00:15:37.560 } 
00:15:37.560 }, 00:15:37.560 { 00:15:37.560 "method": "sock_impl_set_options", 00:15:37.560 "params": { 00:15:37.560 "impl_name": "posix", 00:15:37.560 "recv_buf_size": 2097152, 00:15:37.560 "send_buf_size": 2097152, 00:15:37.560 "enable_recv_pipe": true, 00:15:37.560 "enable_quickack": false, 00:15:37.560 "enable_placement_id": 0, 00:15:37.560 "enable_zerocopy_send_server": true, 00:15:37.560 "enable_zerocopy_send_client": false, 00:15:37.560 "zerocopy_threshold": 0, 00:15:37.560 "tls_version": 0, 00:15:37.560 "enable_ktls": false 00:15:37.560 } 00:15:37.560 }, 00:15:37.560 { 00:15:37.560 "method": "sock_impl_set_options", 00:15:37.560 "params": { 00:15:37.560 "impl_name": "uring", 00:15:37.560 "recv_buf_size": 2097152, 00:15:37.560 "send_buf_size": 2097152, 00:15:37.560 "enable_recv_pipe": true, 00:15:37.560 "enable_quickack": false, 00:15:37.560 "enable_placement_id": 0, 00:15:37.560 "enable_zerocopy_send_server": false, 00:15:37.560 "enable_zerocopy_send_client": false, 00:15:37.560 "zerocopy_threshold": 0, 00:15:37.560 "tls_version": 0, 00:15:37.560 "enable_ktls": false 00:15:37.560 } 00:15:37.560 } 00:15:37.560 ] 00:15:37.560 }, 00:15:37.560 { 00:15:37.560 "subsystem": "vmd", 00:15:37.560 "config": [] 00:15:37.560 }, 00:15:37.560 { 00:15:37.560 "subsystem": "accel", 00:15:37.560 "config": [ 00:15:37.560 { 00:15:37.560 "method": "accel_set_options", 00:15:37.560 "params": { 00:15:37.560 "small_cache_size": 128, 00:15:37.560 "large_cache_size": 16, 00:15:37.560 "task_count": 2048, 00:15:37.560 "sequence_count": 2048, 00:15:37.560 "buf_count": 2048 00:15:37.560 } 00:15:37.560 } 00:15:37.560 ] 00:15:37.560 }, 00:15:37.560 { 00:15:37.560 "subsystem": "bdev", 00:15:37.560 "config": [ 00:15:37.560 { 00:15:37.560 "method": "bdev_set_options", 00:15:37.560 "params": { 00:15:37.560 "bdev_io_pool_size": 65535, 00:15:37.560 "bdev_io_cache_size": 256, 00:15:37.560 "bdev_auto_examine": true, 00:15:37.560 "iobuf_small_cache_size": 128, 00:15:37.560 "iobuf_large_cache_size": 16 00:15:37.560 } 00:15:37.560 }, 00:15:37.560 { 00:15:37.560 "method": "bdev_raid_set_options", 00:15:37.560 "params": { 00:15:37.560 "process_window_size_kb": 1024, 00:15:37.560 "process_max_bandwidth_mb_sec": 0 00:15:37.560 } 00:15:37.560 }, 00:15:37.560 { 00:15:37.560 "method": "bdev_iscsi_set_options", 00:15:37.560 "params": { 00:15:37.561 "timeout_sec": 30 00:15:37.561 } 00:15:37.561 }, 00:15:37.561 { 00:15:37.561 "method": "bdev_nvme_set_options", 00:15:37.561 "params": { 00:15:37.561 "action_on_timeout": "none", 00:15:37.561 "timeout_us": 0, 00:15:37.561 "timeout_admin_us": 0, 00:15:37.561 "keep_alive_timeout_ms": 10000, 00:15:37.561 "arbitration_burst": 0, 00:15:37.561 "low_priority_weight": 0, 00:15:37.561 "medium_priority_weight": 0, 00:15:37.561 "high_priority_weight": 0, 00:15:37.561 "nvme_adminq_poll_period_us": 10000, 00:15:37.561 "nvme_ioq_poll_period_us": 0, 00:15:37.561 "io_queue_requests": 512, 00:15:37.561 "delay_cmd_submit": true, 00:15:37.561 "transport_retry_count": 4, 00:15:37.561 "bdev_retry_count": 3, 00:15:37.561 "transport_ack_timeout": 0, 00:15:37.561 "ctrlr_loss_timeout_sec": 0, 00:15:37.561 "reconnect_delay_sec": 0, 00:15:37.561 "fast_io_fail_timeout_sec": 0, 00:15:37.561 "disable_auto_failback": false, 00:15:37.561 "generate_uuids": false, 00:15:37.561 "transport_tos": 0, 00:15:37.561 "nvme_error_stat": false, 00:15:37.561 "rdma_srq_size": 0, 00:15:37.561 "io_path_stat": false, 00:15:37.561 "allow_accel_sequence": false, 00:15:37.561 "rdma_max_cq_size": 0, 00:15:37.561 "rdma_cm_event_timeout_ms": 
0, 00:15:37.561 "dhchap_digests": [ 00:15:37.561 "sha256", 00:15:37.561 "sha384", 00:15:37.561 "sha512" 00:15:37.561 ], 00:15:37.561 "dhchap_dhgroups": [ 00:15:37.561 "null", 00:15:37.561 "ffdhe2048", 00:15:37.561 "ffdhe3072", 00:15:37.561 "ffdhe4096", 00:15:37.561 "ffdhe6144", 00:15:37.561 "ffdhe8192" 00:15:37.561 ] 00:15:37.561 } 00:15:37.561 }, 00:15:37.561 { 00:15:37.561 "method": "bdev_nvme_attach_controller", 00:15:37.561 "params": { 00:15:37.561 "name": "TLSTEST", 00:15:37.561 "trtype": "TCP", 00:15:37.561 "adrfam": "IPv4", 00:15:37.561 "traddr": "10.0.0.3", 00:15:37.561 "trsvcid": "4420", 00:15:37.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.561 "prchk_reftag": false, 00:15:37.561 "prchk_guard": false, 00:15:37.561 "ctrlr_loss_timeout_sec": 0, 00:15:37.561 "reconnect_delay_sec": 0, 00:15:37.561 "fast_io_fail_timeout_sec": 0, 00:15:37.561 "psk": "key0", 00:15:37.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.561 "hdgst": false, 00:15:37.561 "ddgst": false 00:15:37.561 } 00:15:37.561 }, 00:15:37.561 { 00:15:37.561 "method": "bdev_nvme_set_hotplug", 00:15:37.561 "params": { 00:15:37.561 "period_us": 100000, 00:15:37.561 "enable": false 00:15:37.561 } 00:15:37.561 }, 00:15:37.561 { 00:15:37.561 "method": "bdev_wait_for_examine" 00:15:37.561 } 00:15:37.561 ] 00:15:37.561 }, 00:15:37.561 { 00:15:37.561 "subsystem": "nbd", 00:15:37.561 "config": [] 00:15:37.561 } 00:15:37.561 ] 00:15:37.561 }' 00:15:37.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.561 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.561 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:37.561 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.561 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:37.561 02:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.561 [2024-11-08 02:19:39.411896] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:37.561 [2024-11-08 02:19:39.411977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84881 ] 00:15:37.820 [2024-11-08 02:19:39.548471] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.820 [2024-11-08 02:19:39.591605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.079 [2024-11-08 02:19:39.711800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:38.079 [2024-11-08 02:19:39.744236] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:38.645 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:38.645 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:38.645 02:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:38.903 Running I/O for 10 seconds... 00:15:40.774 4104.00 IOPS, 16.03 MiB/s [2024-11-08T02:19:44.036Z] 4196.50 IOPS, 16.39 MiB/s [2024-11-08T02:19:44.971Z] 4189.33 IOPS, 16.36 MiB/s [2024-11-08T02:19:45.919Z] 4248.50 IOPS, 16.60 MiB/s [2024-11-08T02:19:46.961Z] 4130.00 IOPS, 16.13 MiB/s [2024-11-08T02:19:47.897Z] 4073.50 IOPS, 15.91 MiB/s [2024-11-08T02:19:48.831Z] 4125.71 IOPS, 16.12 MiB/s [2024-11-08T02:19:49.766Z] 4114.75 IOPS, 16.07 MiB/s [2024-11-08T02:19:50.699Z] 4144.56 IOPS, 16.19 MiB/s [2024-11-08T02:19:50.699Z] 4152.40 IOPS, 16.22 MiB/s 00:15:48.815 Latency(us) 00:15:48.815 [2024-11-08T02:19:50.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.815 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:48.815 Verification LBA range: start 0x0 length 0x2000 00:15:48.815 TLSTESTn1 : 10.01 4158.92 16.25 0.00 0.00 30726.15 4289.63 24069.59 00:15:48.815 [2024-11-08T02:19:50.699Z] =================================================================================================================== 00:15:48.815 [2024-11-08T02:19:50.699Z] Total : 4158.92 16.25 0.00 0.00 30726.15 4289.63 24069.59 00:15:48.815 { 00:15:48.815 "results": [ 00:15:48.815 { 00:15:48.815 "job": "TLSTESTn1", 00:15:48.815 "core_mask": "0x4", 00:15:48.815 "workload": "verify", 00:15:48.815 "status": "finished", 00:15:48.815 "verify_range": { 00:15:48.815 "start": 0, 00:15:48.815 "length": 8192 00:15:48.815 }, 00:15:48.815 "queue_depth": 128, 00:15:48.815 "io_size": 4096, 00:15:48.815 "runtime": 10.013904, 00:15:48.815 "iops": 4158.917441189769, 00:15:48.815 "mibps": 16.245771254647536, 00:15:48.815 "io_failed": 0, 00:15:48.815 "io_timeout": 0, 00:15:48.815 "avg_latency_us": 30726.150477716397, 00:15:48.815 "min_latency_us": 4289.629090909091, 00:15:48.815 "max_latency_us": 24069.585454545453 00:15:48.815 } 00:15:48.815 ], 00:15:48.815 "core_count": 1 00:15:48.815 } 00:15:48.815 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:48.815 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84881 00:15:48.815 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84881 ']' 00:15:48.815 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 84881 00:15:48.815 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:48.815 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:48.815 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84881 00:15:49.073 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:49.073 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:49.073 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84881' 00:15:49.073 killing process with pid 84881 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84881 00:15:49.074 Received shutdown signal, test time was about 10.000000 seconds 00:15:49.074 00:15:49.074 Latency(us) 00:15:49.074 [2024-11-08T02:19:50.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.074 [2024-11-08T02:19:50.958Z] =================================================================================================================== 00:15:49.074 [2024-11-08T02:19:50.958Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84881 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84849 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84849 ']' 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84849 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84849 00:15:49.074 killing process with pid 84849 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84849' 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84849 00:15:49.074 02:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84849 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=85014 00:15:49.332 02:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 85014 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85014 ']' 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.332 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:49.332 [2024-11-08 02:19:51.108253] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:49.332 [2024-11-08 02:19:51.109321] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.590 [2024-11-08 02:19:51.246008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.590 [2024-11-08 02:19:51.277953] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.590 [2024-11-08 02:19:51.278008] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.590 [2024-11-08 02:19:51.278035] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.590 [2024-11-08 02:19:51.278042] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.590 [2024-11-08 02:19:51.278048] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:49.590 [2024-11-08 02:19:51.278072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.590 [2024-11-08 02:19:51.306080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:49.590 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.590 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:49.590 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:49.590 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:49.590 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:49.590 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.590 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.hiYTAxHGEA 00:15:49.590 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hiYTAxHGEA 00:15:49.590 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:49.848 [2024-11-08 02:19:51.614818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.848 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:50.107 02:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:50.365 [2024-11-08 02:19:52.146969] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:50.365 [2024-11-08 02:19:52.147302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:50.365 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:50.624 malloc0 00:15:50.624 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:50.884 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA 00:15:51.142 02:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:51.400 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=85062 00:15:51.400 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:51.400 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:51.400 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 85062 /var/tmp/bdevperf.sock 00:15:51.400 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85062 ']' 00:15:51.400 
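For readability, the target-side TLS setup traced above condenses into the following rpc.py sequence. This is a sketch reconstructed from this run's trace, not additional captured output; the key path /tmp/tmp.hiYTAxHGEA, the NQNs, and the 10.0.0.3:4420 listener are the values used in this run, and the scripts live under /home/vagrant/spdk_repo/spdk in this environment.

    # create the TCP transport and the subsystem
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as a secure (TLS) channel; the target logs TLS support as experimental
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    # back the subsystem with a 32 MiB malloc bdev as namespace 1
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # register the PSK file and allow host1 to connect with it
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0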
02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:51.401 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.401 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:51.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:51.401 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.401 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.401 [2024-11-08 02:19:53.263605] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:51.401 [2024-11-08 02:19:53.263991] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85062 ] 00:15:51.659 [2024-11-08 02:19:53.409307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.659 [2024-11-08 02:19:53.452566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.659 [2024-11-08 02:19:53.486346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:51.659 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.659 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:51.659 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA 00:15:51.917 02:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:52.174 [2024-11-08 02:19:53.987810] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:52.174 nvme0n1 00:15:52.432 02:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:52.432 Running I/O for 1 seconds... 
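The initiator side mirrors this: the trace above registers the same PSK with the bdevperf application over its RPC socket, attaches the controller with --psk, and then drives the workload through bdevperf.py. Roughly, with the values from this run (paths relative to the SPDK repo, /home/vagrant/spdk_repo/spdk here):

    # point the bdevperf app at the same pre-shared key file
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA
    # attach the TLS-protected controller; --psk key0 selects the key registered above
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # run the workload configured on the bdevperf command line (-q 128 -o 4k -w verify -t 1)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The throughput reported in the results that follow derives directly from the result JSON: MiB/s = iops * io_size / 2^20, e.g. 4063.42 * 4096 / 1048576 = 15.87 MiB/s for this 1-second run.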
00:15:53.368 4033.00 IOPS, 15.75 MiB/s 00:15:53.368 Latency(us) 00:15:53.368 [2024-11-08T02:19:55.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.368 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:53.368 Verification LBA range: start 0x0 length 0x2000 00:15:53.368 nvme0n1 : 1.02 4063.42 15.87 0.00 0.00 31068.67 7000.44 19541.64 00:15:53.368 [2024-11-08T02:19:55.252Z] =================================================================================================================== 00:15:53.368 [2024-11-08T02:19:55.252Z] Total : 4063.42 15.87 0.00 0.00 31068.67 7000.44 19541.64 00:15:53.368 { 00:15:53.368 "results": [ 00:15:53.368 { 00:15:53.368 "job": "nvme0n1", 00:15:53.368 "core_mask": "0x2", 00:15:53.368 "workload": "verify", 00:15:53.368 "status": "finished", 00:15:53.368 "verify_range": { 00:15:53.368 "start": 0, 00:15:53.368 "length": 8192 00:15:53.368 }, 00:15:53.368 "queue_depth": 128, 00:15:53.368 "io_size": 4096, 00:15:53.368 "runtime": 1.024013, 00:15:53.368 "iops": 4063.424976050109, 00:15:53.368 "mibps": 15.872753812695738, 00:15:53.368 "io_failed": 0, 00:15:53.368 "io_timeout": 0, 00:15:53.368 "avg_latency_us": 31068.668744838433, 00:15:53.368 "min_latency_us": 7000.436363636363, 00:15:53.368 "max_latency_us": 19541.643636363635 00:15:53.368 } 00:15:53.368 ], 00:15:53.368 "core_count": 1 00:15:53.368 } 00:15:53.368 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 85062 00:15:53.368 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85062 ']' 00:15:53.368 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85062 00:15:53.368 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:53.368 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.368 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85062 00:15:53.368 killing process with pid 85062 00:15:53.368 Received shutdown signal, test time was about 1.000000 seconds 00:15:53.368 00:15:53.368 Latency(us) 00:15:53.368 [2024-11-08T02:19:55.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.368 [2024-11-08T02:19:55.252Z] =================================================================================================================== 00:15:53.368 [2024-11-08T02:19:55.252Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.368 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:53.368 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:53.368 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85062' 00:15:53.368 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85062 00:15:53.368 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85062 00:15:53.627 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 85014 00:15:53.627 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85014 ']' 00:15:53.627 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85014 00:15:53.627 02:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:53.627 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.627 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85014 00:15:53.627 killing process with pid 85014 00:15:53.627 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.627 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.627 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85014' 00:15:53.627 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85014 00:15:53.627 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85014 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=85106 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 85106 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85106 ']' 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:53.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:53.887 02:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.887 [2024-11-08 02:19:55.670400] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:53.887 [2024-11-08 02:19:55.670550] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.146 [2024-11-08 02:19:55.812062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.146 [2024-11-08 02:19:55.848212] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.146 [2024-11-08 02:19:55.848344] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:54.146 [2024-11-08 02:19:55.848356] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.146 [2024-11-08 02:19:55.848364] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.146 [2024-11-08 02:19:55.848371] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.146 [2024-11-08 02:19:55.848405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.146 [2024-11-08 02:19:55.878879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.082 [2024-11-08 02:19:56.648701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.082 malloc0 00:15:55.082 [2024-11-08 02:19:56.688386] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:55.082 [2024-11-08 02:19:56.688583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:55.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=85138 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 85138 /var/tmp/bdevperf.sock 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85138 ']' 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:55.082 02:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.082 [2024-11-08 02:19:56.767675] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:55.083 [2024-11-08 02:19:56.767936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85138 ] 00:15:55.083 [2024-11-08 02:19:56.901225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.083 [2024-11-08 02:19:56.936001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.342 [2024-11-08 02:19:56.964923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:55.342 02:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.342 02:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:55.342 02:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA 00:15:55.600 02:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:55.859 [2024-11-08 02:19:57.581413] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:55.859 nvme0n1 00:15:55.859 02:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:56.118 Running I/O for 1 seconds... 
00:15:57.055 3929.00 IOPS, 15.35 MiB/s 00:15:57.055 Latency(us) 00:15:57.055 [2024-11-08T02:19:58.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.055 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:57.055 Verification LBA range: start 0x0 length 0x2000 00:15:57.055 nvme0n1 : 1.02 3974.80 15.53 0.00 0.00 31847.36 5719.51 24069.59 00:15:57.055 [2024-11-08T02:19:58.939Z] =================================================================================================================== 00:15:57.055 [2024-11-08T02:19:58.939Z] Total : 3974.80 15.53 0.00 0.00 31847.36 5719.51 24069.59 00:15:57.055 { 00:15:57.055 "results": [ 00:15:57.055 { 00:15:57.055 "job": "nvme0n1", 00:15:57.055 "core_mask": "0x2", 00:15:57.055 "workload": "verify", 00:15:57.055 "status": "finished", 00:15:57.055 "verify_range": { 00:15:57.055 "start": 0, 00:15:57.055 "length": 8192 00:15:57.055 }, 00:15:57.055 "queue_depth": 128, 00:15:57.055 "io_size": 4096, 00:15:57.055 "runtime": 1.020932, 00:15:57.055 "iops": 3974.799496930256, 00:15:57.055 "mibps": 15.526560534883812, 00:15:57.055 "io_failed": 0, 00:15:57.055 "io_timeout": 0, 00:15:57.055 "avg_latency_us": 31847.36073121556, 00:15:57.055 "min_latency_us": 5719.505454545455, 00:15:57.055 "max_latency_us": 24069.585454545453 00:15:57.055 } 00:15:57.055 ], 00:15:57.055 "core_count": 1 00:15:57.055 } 00:15:57.055 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:57.055 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.055 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:57.314 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.314 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:57.314 "subsystems": [ 00:15:57.314 { 00:15:57.314 "subsystem": "keyring", 00:15:57.314 "config": [ 00:15:57.314 { 00:15:57.314 "method": "keyring_file_add_key", 00:15:57.314 "params": { 00:15:57.314 "name": "key0", 00:15:57.314 "path": "/tmp/tmp.hiYTAxHGEA" 00:15:57.314 } 00:15:57.314 } 00:15:57.314 ] 00:15:57.314 }, 00:15:57.314 { 00:15:57.314 "subsystem": "iobuf", 00:15:57.314 "config": [ 00:15:57.314 { 00:15:57.314 "method": "iobuf_set_options", 00:15:57.314 "params": { 00:15:57.314 "small_pool_count": 8192, 00:15:57.314 "large_pool_count": 1024, 00:15:57.314 "small_bufsize": 8192, 00:15:57.314 "large_bufsize": 135168 00:15:57.314 } 00:15:57.314 } 00:15:57.314 ] 00:15:57.314 }, 00:15:57.314 { 00:15:57.314 "subsystem": "sock", 00:15:57.314 "config": [ 00:15:57.314 { 00:15:57.314 "method": "sock_set_default_impl", 00:15:57.314 "params": { 00:15:57.314 "impl_name": "uring" 00:15:57.314 } 00:15:57.314 }, 00:15:57.314 { 00:15:57.314 "method": "sock_impl_set_options", 00:15:57.314 "params": { 00:15:57.314 "impl_name": "ssl", 00:15:57.314 "recv_buf_size": 4096, 00:15:57.314 "send_buf_size": 4096, 00:15:57.314 "enable_recv_pipe": true, 00:15:57.314 "enable_quickack": false, 00:15:57.314 "enable_placement_id": 0, 00:15:57.314 "enable_zerocopy_send_server": true, 00:15:57.314 "enable_zerocopy_send_client": false, 00:15:57.314 "zerocopy_threshold": 0, 00:15:57.314 "tls_version": 0, 00:15:57.314 "enable_ktls": false 00:15:57.314 } 00:15:57.314 }, 00:15:57.314 { 00:15:57.314 "method": "sock_impl_set_options", 00:15:57.314 "params": { 00:15:57.314 "impl_name": "posix", 00:15:57.314 "recv_buf_size": 
2097152, 00:15:57.314 "send_buf_size": 2097152, 00:15:57.314 "enable_recv_pipe": true, 00:15:57.314 "enable_quickack": false, 00:15:57.314 "enable_placement_id": 0, 00:15:57.314 "enable_zerocopy_send_server": true, 00:15:57.314 "enable_zerocopy_send_client": false, 00:15:57.314 "zerocopy_threshold": 0, 00:15:57.314 "tls_version": 0, 00:15:57.314 "enable_ktls": false 00:15:57.314 } 00:15:57.314 }, 00:15:57.314 { 00:15:57.314 "method": "sock_impl_set_options", 00:15:57.314 "params": { 00:15:57.314 "impl_name": "uring", 00:15:57.314 "recv_buf_size": 2097152, 00:15:57.314 "send_buf_size": 2097152, 00:15:57.314 "enable_recv_pipe": true, 00:15:57.314 "enable_quickack": false, 00:15:57.314 "enable_placement_id": 0, 00:15:57.314 "enable_zerocopy_send_server": false, 00:15:57.314 "enable_zerocopy_send_client": false, 00:15:57.314 "zerocopy_threshold": 0, 00:15:57.314 "tls_version": 0, 00:15:57.314 "enable_ktls": false 00:15:57.314 } 00:15:57.314 } 00:15:57.314 ] 00:15:57.314 }, 00:15:57.314 { 00:15:57.314 "subsystem": "vmd", 00:15:57.314 "config": [] 00:15:57.314 }, 00:15:57.314 { 00:15:57.314 "subsystem": "accel", 00:15:57.314 "config": [ 00:15:57.314 { 00:15:57.314 "method": "accel_set_options", 00:15:57.314 "params": { 00:15:57.314 "small_cache_size": 128, 00:15:57.314 "large_cache_size": 16, 00:15:57.314 "task_count": 2048, 00:15:57.314 "sequence_count": 2048, 00:15:57.314 "buf_count": 2048 00:15:57.314 } 00:15:57.314 } 00:15:57.314 ] 00:15:57.314 }, 00:15:57.314 { 00:15:57.314 "subsystem": "bdev", 00:15:57.314 "config": [ 00:15:57.314 { 00:15:57.314 "method": "bdev_set_options", 00:15:57.314 "params": { 00:15:57.314 "bdev_io_pool_size": 65535, 00:15:57.314 "bdev_io_cache_size": 256, 00:15:57.314 "bdev_auto_examine": true, 00:15:57.314 "iobuf_small_cache_size": 128, 00:15:57.314 "iobuf_large_cache_size": 16 00:15:57.314 } 00:15:57.314 }, 00:15:57.314 { 00:15:57.314 "method": "bdev_raid_set_options", 00:15:57.314 "params": { 00:15:57.314 "process_window_size_kb": 1024, 00:15:57.314 "process_max_bandwidth_mb_sec": 0 00:15:57.314 } 00:15:57.314 }, 00:15:57.314 { 00:15:57.314 "method": "bdev_iscsi_set_options", 00:15:57.314 "params": { 00:15:57.314 "timeout_sec": 30 00:15:57.314 } 00:15:57.314 }, 00:15:57.314 { 00:15:57.314 "method": "bdev_nvme_set_options", 00:15:57.314 "params": { 00:15:57.314 "action_on_timeout": "none", 00:15:57.314 "timeout_us": 0, 00:15:57.314 "timeout_admin_us": 0, 00:15:57.314 "keep_alive_timeout_ms": 10000, 00:15:57.314 "arbitration_burst": 0, 00:15:57.314 "low_priority_weight": 0, 00:15:57.314 "medium_priority_weight": 0, 00:15:57.314 "high_priority_weight": 0, 00:15:57.314 "nvme_adminq_poll_period_us": 10000, 00:15:57.314 "nvme_ioq_poll_period_us": 0, 00:15:57.314 "io_queue_requests": 0, 00:15:57.314 "delay_cmd_submit": true, 00:15:57.314 "transport_retry_count": 4, 00:15:57.314 "bdev_retry_count": 3, 00:15:57.314 "transport_ack_timeout": 0, 00:15:57.315 "ctrlr_loss_timeout_sec": 0, 00:15:57.315 "reconnect_delay_sec": 0, 00:15:57.315 "fast_io_fail_timeout_sec": 0, 00:15:57.315 "disable_auto_failback": false, 00:15:57.315 "generate_uuids": false, 00:15:57.315 "transport_tos": 0, 00:15:57.315 "nvme_error_stat": false, 00:15:57.315 "rdma_srq_size": 0, 00:15:57.315 "io_path_stat": false, 00:15:57.315 "allow_accel_sequence": false, 00:15:57.315 "rdma_max_cq_size": 0, 00:15:57.315 "rdma_cm_event_timeout_ms": 0, 00:15:57.315 "dhchap_digests": [ 00:15:57.315 "sha256", 00:15:57.315 "sha384", 00:15:57.315 "sha512" 00:15:57.315 ], 00:15:57.315 "dhchap_dhgroups": [ 00:15:57.315 
"null", 00:15:57.315 "ffdhe2048", 00:15:57.315 "ffdhe3072", 00:15:57.315 "ffdhe4096", 00:15:57.315 "ffdhe6144", 00:15:57.315 "ffdhe8192" 00:15:57.315 ] 00:15:57.315 } 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "method": "bdev_nvme_set_hotplug", 00:15:57.315 "params": { 00:15:57.315 "period_us": 100000, 00:15:57.315 "enable": false 00:15:57.315 } 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "method": "bdev_malloc_create", 00:15:57.315 "params": { 00:15:57.315 "name": "malloc0", 00:15:57.315 "num_blocks": 8192, 00:15:57.315 "block_size": 4096, 00:15:57.315 "physical_block_size": 4096, 00:15:57.315 "uuid": "6cc26876-166c-40e9-a564-4d3b3ae1ddb7", 00:15:57.315 "optimal_io_boundary": 0, 00:15:57.315 "md_size": 0, 00:15:57.315 "dif_type": 0, 00:15:57.315 "dif_is_head_of_md": false, 00:15:57.315 "dif_pi_format": 0 00:15:57.315 } 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "method": "bdev_wait_for_examine" 00:15:57.315 } 00:15:57.315 ] 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "subsystem": "nbd", 00:15:57.315 "config": [] 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "subsystem": "scheduler", 00:15:57.315 "config": [ 00:15:57.315 { 00:15:57.315 "method": "framework_set_scheduler", 00:15:57.315 "params": { 00:15:57.315 "name": "static" 00:15:57.315 } 00:15:57.315 } 00:15:57.315 ] 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "subsystem": "nvmf", 00:15:57.315 "config": [ 00:15:57.315 { 00:15:57.315 "method": "nvmf_set_config", 00:15:57.315 "params": { 00:15:57.315 "discovery_filter": "match_any", 00:15:57.315 "admin_cmd_passthru": { 00:15:57.315 "identify_ctrlr": false 00:15:57.315 }, 00:15:57.315 "dhchap_digests": [ 00:15:57.315 "sha256", 00:15:57.315 "sha384", 00:15:57.315 "sha512" 00:15:57.315 ], 00:15:57.315 "dhchap_dhgroups": [ 00:15:57.315 "null", 00:15:57.315 "ffdhe2048", 00:15:57.315 "ffdhe3072", 00:15:57.315 "ffdhe4096", 00:15:57.315 "ffdhe6144", 00:15:57.315 "ffdhe8192" 00:15:57.315 ] 00:15:57.315 } 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "method": "nvmf_set_max_subsystems", 00:15:57.315 "params": { 00:15:57.315 "max_subsystems": 1024 00:15:57.315 } 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "method": "nvmf_set_crdt", 00:15:57.315 "params": { 00:15:57.315 "crdt1": 0, 00:15:57.315 "crdt2": 0, 00:15:57.315 "crdt3": 0 00:15:57.315 } 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "method": "nvmf_create_transport", 00:15:57.315 "params": { 00:15:57.315 "trtype": "TCP", 00:15:57.315 "max_queue_depth": 128, 00:15:57.315 "max_io_qpairs_per_ctrlr": 127, 00:15:57.315 "in_capsule_data_size": 4096, 00:15:57.315 "max_io_size": 131072, 00:15:57.315 "io_unit_size": 131072, 00:15:57.315 "max_aq_depth": 128, 00:15:57.315 "num_shared_buffers": 511, 00:15:57.315 "buf_cache_size": 4294967295, 00:15:57.315 "dif_insert_or_strip": false, 00:15:57.315 "zcopy": false, 00:15:57.315 "c2h_success": false, 00:15:57.315 "sock_priority": 0, 00:15:57.315 "abort_timeout_sec": 1, 00:15:57.315 "ack_timeout": 0, 00:15:57.315 "data_wr_pool_size": 0 00:15:57.315 } 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "method": "nvmf_create_subsystem", 00:15:57.315 "params": { 00:15:57.315 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.315 "allow_any_host": false, 00:15:57.315 "serial_number": "00000000000000000000", 00:15:57.315 "model_number": "SPDK bdev Controller", 00:15:57.315 "max_namespaces": 32, 00:15:57.315 "min_cntlid": 1, 00:15:57.315 "max_cntlid": 65519, 00:15:57.315 "ana_reporting": false 00:15:57.315 } 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "method": "nvmf_subsystem_add_host", 00:15:57.315 "params": { 00:15:57.315 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:57.315 "host": "nqn.2016-06.io.spdk:host1", 00:15:57.315 "psk": "key0" 00:15:57.315 } 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "method": "nvmf_subsystem_add_ns", 00:15:57.315 "params": { 00:15:57.315 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.315 "namespace": { 00:15:57.315 "nsid": 1, 00:15:57.315 "bdev_name": "malloc0", 00:15:57.315 "nguid": "6CC26876166C40E9A5644D3B3AE1DDB7", 00:15:57.315 "uuid": "6cc26876-166c-40e9-a564-4d3b3ae1ddb7", 00:15:57.315 "no_auto_visible": false 00:15:57.315 } 00:15:57.315 } 00:15:57.315 }, 00:15:57.315 { 00:15:57.315 "method": "nvmf_subsystem_add_listener", 00:15:57.315 "params": { 00:15:57.315 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.315 "listen_address": { 00:15:57.315 "trtype": "TCP", 00:15:57.315 "adrfam": "IPv4", 00:15:57.315 "traddr": "10.0.0.3", 00:15:57.315 "trsvcid": "4420" 00:15:57.315 }, 00:15:57.315 "secure_channel": false, 00:15:57.315 "sock_impl": "ssl" 00:15:57.315 } 00:15:57.315 } 00:15:57.315 ] 00:15:57.315 } 00:15:57.315 ] 00:15:57.315 }' 00:15:57.315 02:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:57.575 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:57.575 "subsystems": [ 00:15:57.575 { 00:15:57.575 "subsystem": "keyring", 00:15:57.575 "config": [ 00:15:57.575 { 00:15:57.575 "method": "keyring_file_add_key", 00:15:57.575 "params": { 00:15:57.575 "name": "key0", 00:15:57.575 "path": "/tmp/tmp.hiYTAxHGEA" 00:15:57.575 } 00:15:57.575 } 00:15:57.575 ] 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "subsystem": "iobuf", 00:15:57.575 "config": [ 00:15:57.575 { 00:15:57.575 "method": "iobuf_set_options", 00:15:57.575 "params": { 00:15:57.575 "small_pool_count": 8192, 00:15:57.575 "large_pool_count": 1024, 00:15:57.575 "small_bufsize": 8192, 00:15:57.575 "large_bufsize": 135168 00:15:57.575 } 00:15:57.575 } 00:15:57.575 ] 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "subsystem": "sock", 00:15:57.575 "config": [ 00:15:57.575 { 00:15:57.575 "method": "sock_set_default_impl", 00:15:57.575 "params": { 00:15:57.575 "impl_name": "uring" 00:15:57.575 } 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "method": "sock_impl_set_options", 00:15:57.575 "params": { 00:15:57.575 "impl_name": "ssl", 00:15:57.575 "recv_buf_size": 4096, 00:15:57.575 "send_buf_size": 4096, 00:15:57.575 "enable_recv_pipe": true, 00:15:57.575 "enable_quickack": false, 00:15:57.575 "enable_placement_id": 0, 00:15:57.575 "enable_zerocopy_send_server": true, 00:15:57.575 "enable_zerocopy_send_client": false, 00:15:57.575 "zerocopy_threshold": 0, 00:15:57.575 "tls_version": 0, 00:15:57.575 "enable_ktls": false 00:15:57.575 } 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "method": "sock_impl_set_options", 00:15:57.575 "params": { 00:15:57.575 "impl_name": "posix", 00:15:57.575 "recv_buf_size": 2097152, 00:15:57.575 "send_buf_size": 2097152, 00:15:57.575 "enable_recv_pipe": true, 00:15:57.575 "enable_quickack": false, 00:15:57.575 "enable_placement_id": 0, 00:15:57.575 "enable_zerocopy_send_server": true, 00:15:57.575 "enable_zerocopy_send_client": false, 00:15:57.575 "zerocopy_threshold": 0, 00:15:57.575 "tls_version": 0, 00:15:57.575 "enable_ktls": false 00:15:57.575 } 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "method": "sock_impl_set_options", 00:15:57.575 "params": { 00:15:57.575 "impl_name": "uring", 00:15:57.575 "recv_buf_size": 2097152, 00:15:57.575 "send_buf_size": 2097152, 00:15:57.575 
"enable_recv_pipe": true, 00:15:57.575 "enable_quickack": false, 00:15:57.575 "enable_placement_id": 0, 00:15:57.575 "enable_zerocopy_send_server": false, 00:15:57.575 "enable_zerocopy_send_client": false, 00:15:57.575 "zerocopy_threshold": 0, 00:15:57.575 "tls_version": 0, 00:15:57.575 "enable_ktls": false 00:15:57.575 } 00:15:57.575 } 00:15:57.575 ] 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "subsystem": "vmd", 00:15:57.575 "config": [] 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "subsystem": "accel", 00:15:57.575 "config": [ 00:15:57.575 { 00:15:57.575 "method": "accel_set_options", 00:15:57.575 "params": { 00:15:57.575 "small_cache_size": 128, 00:15:57.575 "large_cache_size": 16, 00:15:57.575 "task_count": 2048, 00:15:57.575 "sequence_count": 2048, 00:15:57.575 "buf_count": 2048 00:15:57.575 } 00:15:57.575 } 00:15:57.575 ] 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "subsystem": "bdev", 00:15:57.575 "config": [ 00:15:57.575 { 00:15:57.575 "method": "bdev_set_options", 00:15:57.575 "params": { 00:15:57.575 "bdev_io_pool_size": 65535, 00:15:57.575 "bdev_io_cache_size": 256, 00:15:57.575 "bdev_auto_examine": true, 00:15:57.575 "iobuf_small_cache_size": 128, 00:15:57.575 "iobuf_large_cache_size": 16 00:15:57.575 } 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "method": "bdev_raid_set_options", 00:15:57.575 "params": { 00:15:57.575 "process_window_size_kb": 1024, 00:15:57.575 "process_max_bandwidth_mb_sec": 0 00:15:57.575 } 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "method": "bdev_iscsi_set_options", 00:15:57.575 "params": { 00:15:57.575 "timeout_sec": 30 00:15:57.575 } 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "method": "bdev_nvme_set_options", 00:15:57.575 "params": { 00:15:57.575 "action_on_timeout": "none", 00:15:57.575 "timeout_us": 0, 00:15:57.575 "timeout_admin_us": 0, 00:15:57.575 "keep_alive_timeout_ms": 10000, 00:15:57.575 "arbitration_burst": 0, 00:15:57.575 "low_priority_weight": 0, 00:15:57.575 "medium_priority_weight": 0, 00:15:57.575 "high_priority_weight": 0, 00:15:57.575 "nvme_adminq_poll_period_us": 10000, 00:15:57.575 "nvme_ioq_poll_period_us": 0, 00:15:57.575 "io_queue_requests": 512, 00:15:57.575 "delay_cmd_submit": true, 00:15:57.575 "transport_retry_count": 4, 00:15:57.575 "bdev_retry_count": 3, 00:15:57.575 "transport_ack_timeout": 0, 00:15:57.575 "ctrlr_loss_timeout_sec": 0, 00:15:57.575 "reconnect_delay_sec": 0, 00:15:57.575 "fast_io_fail_timeout_sec": 0, 00:15:57.575 "disable_auto_failback": false, 00:15:57.575 "generate_uuids": false, 00:15:57.575 "transport_tos": 0, 00:15:57.575 "nvme_error_stat": false, 00:15:57.575 "rdma_srq_size": 0, 00:15:57.575 "io_path_stat": false, 00:15:57.575 "allow_accel_sequence": false, 00:15:57.575 "rdma_max_cq_size": 0, 00:15:57.575 "rdma_cm_event_timeout_ms": 0, 00:15:57.575 "dhchap_digests": [ 00:15:57.575 "sha256", 00:15:57.575 "sha384", 00:15:57.575 "sha512" 00:15:57.575 ], 00:15:57.575 "dhchap_dhgroups": [ 00:15:57.575 "null", 00:15:57.575 "ffdhe2048", 00:15:57.575 "ffdhe3072", 00:15:57.575 "ffdhe4096", 00:15:57.575 "ffdhe6144", 00:15:57.575 "ffdhe8192" 00:15:57.575 ] 00:15:57.575 } 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "method": "bdev_nvme_attach_controller", 00:15:57.575 "params": { 00:15:57.575 "name": "nvme0", 00:15:57.575 "trtype": "TCP", 00:15:57.575 "adrfam": "IPv4", 00:15:57.575 "traddr": "10.0.0.3", 00:15:57.575 "trsvcid": "4420", 00:15:57.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.575 "prchk_reftag": false, 00:15:57.575 "prchk_guard": false, 00:15:57.575 "ctrlr_loss_timeout_sec": 0, 00:15:57.575 
"reconnect_delay_sec": 0, 00:15:57.575 "fast_io_fail_timeout_sec": 0, 00:15:57.575 "psk": "key0", 00:15:57.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:57.575 "hdgst": false, 00:15:57.575 "ddgst": false 00:15:57.575 } 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "method": "bdev_nvme_set_hotplug", 00:15:57.575 "params": { 00:15:57.575 "period_us": 100000, 00:15:57.575 "enable": false 00:15:57.575 } 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "method": "bdev_enable_histogram", 00:15:57.575 "params": { 00:15:57.575 "name": "nvme0n1", 00:15:57.575 "enable": true 00:15:57.575 } 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "method": "bdev_wait_for_examine" 00:15:57.576 } 00:15:57.576 ] 00:15:57.576 }, 00:15:57.576 { 00:15:57.576 "subsystem": "nbd", 00:15:57.576 "config": [] 00:15:57.576 } 00:15:57.576 ] 00:15:57.576 }' 00:15:57.576 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 85138 00:15:57.576 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85138 ']' 00:15:57.576 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85138 00:15:57.576 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:57.576 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.576 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85138 00:15:57.576 killing process with pid 85138 00:15:57.576 Received shutdown signal, test time was about 1.000000 seconds 00:15:57.576 00:15:57.576 Latency(us) 00:15:57.576 [2024-11-08T02:19:59.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.576 [2024-11-08T02:19:59.460Z] =================================================================================================================== 00:15:57.576 [2024-11-08T02:19:59.460Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:57.576 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:57.576 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:57.576 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85138' 00:15:57.576 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85138 00:15:57.576 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85138 00:15:57.835 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 85106 00:15:57.835 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85106 ']' 00:15:57.835 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85106 00:15:57.835 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:57.835 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.835 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85106 00:15:57.835 killing process with pid 85106 00:15:57.835 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.835 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:15:57.835 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85106' 00:15:57.835 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85106 00:15:57.835 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85106 00:15:58.095 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:58.095 "subsystems": [ 00:15:58.095 { 00:15:58.095 "subsystem": "keyring", 00:15:58.095 "config": [ 00:15:58.095 { 00:15:58.095 "method": "keyring_file_add_key", 00:15:58.095 "params": { 00:15:58.095 "name": "key0", 00:15:58.095 "path": "/tmp/tmp.hiYTAxHGEA" 00:15:58.095 } 00:15:58.095 } 00:15:58.095 ] 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "subsystem": "iobuf", 00:15:58.095 "config": [ 00:15:58.095 { 00:15:58.095 "method": "iobuf_set_options", 00:15:58.095 "params": { 00:15:58.095 "small_pool_count": 8192, 00:15:58.095 "large_pool_count": 1024, 00:15:58.095 "small_bufsize": 8192, 00:15:58.095 "large_bufsize": 135168 00:15:58.095 } 00:15:58.095 } 00:15:58.095 ] 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "subsystem": "sock", 00:15:58.095 "config": [ 00:15:58.095 { 00:15:58.095 "method": "sock_set_default_impl", 00:15:58.095 "params": { 00:15:58.095 "impl_name": "uring" 00:15:58.095 } 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "method": "sock_impl_set_options", 00:15:58.095 "params": { 00:15:58.095 "impl_name": "ssl", 00:15:58.095 "recv_buf_size": 4096, 00:15:58.095 "send_buf_size": 4096, 00:15:58.095 "enable_recv_pipe": true, 00:15:58.095 "enable_quickack": false, 00:15:58.095 "enable_placement_id": 0, 00:15:58.095 "enable_zerocopy_send_server": true, 00:15:58.095 "enable_zerocopy_send_client": false, 00:15:58.095 "zerocopy_threshold": 0, 00:15:58.095 "tls_version": 0, 00:15:58.095 "enable_ktls": false 00:15:58.095 } 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "method": "sock_impl_set_options", 00:15:58.095 "params": { 00:15:58.095 "impl_name": "posix", 00:15:58.095 "recv_buf_size": 2097152, 00:15:58.095 "send_buf_size": 2097152, 00:15:58.095 "enable_recv_pipe": true, 00:15:58.095 "enable_quickack": false, 00:15:58.095 "enable_placement_id": 0, 00:15:58.095 "enable_zerocopy_send_server": true, 00:15:58.095 "enable_zerocopy_send_client": false, 00:15:58.095 "zerocopy_threshold": 0, 00:15:58.095 "tls_version": 0, 00:15:58.095 "enable_ktls": false 00:15:58.095 } 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "method": "sock_impl_set_options", 00:15:58.095 "params": { 00:15:58.095 "impl_name": "uring", 00:15:58.095 "recv_buf_size": 2097152, 00:15:58.095 "send_buf_size": 2097152, 00:15:58.095 "enable_recv_pipe": true, 00:15:58.095 "enable_quickack": false, 00:15:58.095 "enable_placement_id": 0, 00:15:58.095 "enable_zerocopy_send_server": false, 00:15:58.095 "enable_zerocopy_send_client": false, 00:15:58.095 "zerocopy_threshold": 0, 00:15:58.095 "tls_version": 0, 00:15:58.095 "enable_ktls": false 00:15:58.095 } 00:15:58.095 } 00:15:58.095 ] 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "subsystem": "vmd", 00:15:58.095 "config": [] 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "subsystem": "accel", 00:15:58.095 "config": [ 00:15:58.095 { 00:15:58.095 "method": "accel_set_options", 00:15:58.095 "params": { 00:15:58.095 "small_cache_size": 128, 00:15:58.095 "large_cache_size": 16, 00:15:58.095 "task_count": 2048, 00:15:58.095 "sequence_count": 2048, 00:15:58.095 "buf_count": 2048 00:15:58.095 } 00:15:58.095 } 00:15:58.095 ] 00:15:58.095 }, 00:15:58.095 { 
00:15:58.095 "subsystem": "bdev", 00:15:58.095 "config": [ 00:15:58.095 { 00:15:58.095 "method": "bdev_set_options", 00:15:58.095 "params": { 00:15:58.095 "bdev_io_pool_size": 65535, 00:15:58.095 "bdev_io_cache_size": 256, 00:15:58.095 "bdev_auto_examine": true, 00:15:58.095 "iobuf_small_cache_size": 128, 00:15:58.095 "iobuf_large_cache_size": 16 00:15:58.095 } 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "method": "bdev_raid_set_options", 00:15:58.095 "params": { 00:15:58.095 "process_window_size_kb": 1024, 00:15:58.095 "process_max_bandwidth_mb_sec": 0 00:15:58.095 } 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "method": "bdev_iscsi_set_options", 00:15:58.095 "params": { 00:15:58.095 "timeout_sec": 30 00:15:58.095 } 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "method": "bdev_nvme_set_options", 00:15:58.095 "params": { 00:15:58.095 "action_on_timeout": "none", 00:15:58.095 "timeout_us": 0, 00:15:58.095 "timeout_admin_us": 0, 00:15:58.095 "keep_alive_timeout_ms": 10000, 00:15:58.095 "arbitration_burst": 0, 00:15:58.095 "low_priority_weight": 0, 00:15:58.095 "medium_priority_weight": 0, 00:15:58.095 "high_priority_weight": 0, 00:15:58.095 "nvme_adminq_poll_period_us": 10000, 00:15:58.095 "nvme_ioq_poll_period_us": 0, 00:15:58.095 "io_queue_requests": 0, 00:15:58.095 "delay_cmd_submit": true, 00:15:58.095 "transport_retry_count": 4, 00:15:58.095 "bdev_retry_count": 3, 00:15:58.095 "transport_ack_timeout": 0, 00:15:58.095 "ctrlr_loss_timeout_sec": 0, 00:15:58.095 "reconnect_delay_sec": 0, 00:15:58.095 "fast_io_fail_timeout_sec": 0, 00:15:58.095 "disable_auto_failback": false, 00:15:58.095 "generate_uuids": false, 00:15:58.095 "transport_tos": 0, 00:15:58.095 "nvme_error_stat": false, 00:15:58.095 "rdma_srq_size": 0, 00:15:58.095 "io_path_stat": false, 00:15:58.095 "allow_accel_sequence": false, 00:15:58.095 "rdma_max_cq_size": 0, 00:15:58.095 "rdma_cm_event_timeout_ms": 0, 00:15:58.095 "dhchap_digests": [ 00:15:58.095 "sha256", 00:15:58.095 "sha384", 00:15:58.095 "sha512" 00:15:58.095 ], 00:15:58.095 "dhchap_dhgroups": [ 00:15:58.095 "null", 00:15:58.095 "ffdhe2048", 00:15:58.095 "ffdhe3072", 00:15:58.095 "ffdhe4096", 00:15:58.095 "ffdhe6144", 00:15:58.095 "ffdhe8192" 00:15:58.095 ] 00:15:58.095 } 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "method": "bdev_nvme_set_hotplug", 00:15:58.095 "params": { 00:15:58.095 "period_us": 100000, 00:15:58.095 "enable": false 00:15:58.095 } 00:15:58.095 }, 00:15:58.095 { 00:15:58.095 "method": "bdev_malloc_create", 00:15:58.095 "params": { 00:15:58.095 "name": "malloc0", 00:15:58.095 "num_blocks": 8192, 00:15:58.095 "block_size": 4096, 00:15:58.095 "physical_block_size": 4096, 00:15:58.095 "uuid": "6cc26876-166c-40e9-a564-4d3b3ae1ddb7", 00:15:58.095 "optimal_io_boundary": 0, 00:15:58.095 "md_size": 0, 00:15:58.095 "dif_type": 0, 00:15:58.095 "dif_is_head_of_md": false, 00:15:58.095 "dif_pi_format": 0 00:15:58.095 } 00:15:58.095 }, 00:15:58.095 { 00:15:58.096 "method": "bdev_wait_for_examine" 00:15:58.096 } 00:15:58.096 ] 00:15:58.096 }, 00:15:58.096 { 00:15:58.096 "subsystem": "nbd", 00:15:58.096 "config": [] 00:15:58.096 }, 00:15:58.096 { 00:15:58.096 "subsystem": "scheduler", 00:15:58.096 "config": [ 00:15:58.096 { 00:15:58.096 "method": "framework_set_scheduler", 00:15:58.096 "params": { 00:15:58.096 "name": "static" 00:15:58.096 } 00:15:58.096 } 00:15:58.096 ] 00:15:58.096 }, 00:15:58.096 { 00:15:58.096 "subsystem": "nvmf", 00:15:58.096 "config": [ 00:15:58.096 { 00:15:58.096 "method": "nvmf_set_config", 00:15:58.096 "params": { 00:15:58.096 
"discovery_filter": "match_any", 00:15:58.096 "admin_cmd_passthru": { 00:15:58.096 "identify_ctrlr": false 00:15:58.096 }, 00:15:58.096 "dhchap_digests": [ 00:15:58.096 "sha256", 00:15:58.096 "sha384", 00:15:58.096 "sha512" 00:15:58.096 ], 00:15:58.096 "dhchap_dhgroups": [ 00:15:58.096 "null", 00:15:58.096 "ffdhe2048", 00:15:58.096 "ffdhe3072", 00:15:58.096 "ffdhe4096", 00:15:58.096 "ffdhe6144", 00:15:58.096 "ffdhe8192" 00:15:58.096 ] 00:15:58.096 } 00:15:58.096 }, 00:15:58.096 { 00:15:58.096 "method": "nvmf_set_max_subsystems", 00:15:58.096 "params": { 00:15:58.096 "max_ 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:58.096 subsystems": 1024 00:15:58.096 } 00:15:58.096 }, 00:15:58.096 { 00:15:58.096 "method": "nvmf_set_crdt", 00:15:58.096 "params": { 00:15:58.096 "crdt1": 0, 00:15:58.096 "crdt2": 0, 00:15:58.096 "crdt3": 0 00:15:58.096 } 00:15:58.096 }, 00:15:58.096 { 00:15:58.096 "method": "nvmf_create_transport", 00:15:58.096 "params": { 00:15:58.096 "trtype": "TCP", 00:15:58.096 "max_queue_depth": 128, 00:15:58.096 "max_io_qpairs_per_ctrlr": 127, 00:15:58.096 "in_capsule_data_size": 4096, 00:15:58.096 "max_io_size": 131072, 00:15:58.096 "io_unit_size": 131072, 00:15:58.096 "max_aq_depth": 128, 00:15:58.096 "num_shared_buffers": 511, 00:15:58.096 "buf_cache_size": 4294967295, 00:15:58.096 "dif_insert_or_strip": false, 00:15:58.096 "zcopy": false, 00:15:58.096 "c2h_success": false, 00:15:58.096 "sock_priority": 0, 00:15:58.096 "abort_timeout_sec": 1, 00:15:58.096 "ack_timeout": 0, 00:15:58.096 "data_wr_pool_size": 0 00:15:58.096 } 00:15:58.096 }, 00:15:58.096 { 00:15:58.096 "method": "nvmf_create_subsystem", 00:15:58.096 "params": { 00:15:58.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.096 "allow_any_host": false, 00:15:58.096 "serial_number": "00000000000000000000", 00:15:58.096 "model_number": "SPDK bdev Controller", 00:15:58.096 "max_namespaces": 32, 00:15:58.096 "min_cntlid": 1, 00:15:58.096 "max_cntlid": 65519, 00:15:58.096 "ana_reporting": false 00:15:58.096 } 00:15:58.096 }, 00:15:58.096 { 00:15:58.096 "method": "nvmf_subsystem_add_host", 00:15:58.096 "params": { 00:15:58.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.096 "host": "nqn.2016-06.io.spdk:host1", 00:15:58.096 "psk": "key0" 00:15:58.096 } 00:15:58.096 }, 00:15:58.096 { 00:15:58.096 "method": "nvmf_subsystem_add_ns", 00:15:58.096 "params": { 00:15:58.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.096 "namespace": { 00:15:58.096 "nsid": 1, 00:15:58.096 "bdev_name": "malloc0", 00:15:58.096 "nguid": "6CC26876166C40E9A5644D3B3AE1DDB7", 00:15:58.096 "uuid": "6cc26876-166c-40e9-a564-4d3b3ae1ddb7", 00:15:58.096 "no_auto_visible": false 00:15:58.096 } 00:15:58.096 } 00:15:58.096 }, 00:15:58.096 { 00:15:58.096 "method": "nvmf_subsystem_add_listener", 00:15:58.096 "params": { 00:15:58.096 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.096 "listen_address": { 00:15:58.096 "trtype": "TCP", 00:15:58.096 "adrfam": "IPv4", 00:15:58.096 "traddr": "10.0.0.3", 00:15:58.096 "trsvcid": "4420" 00:15:58.096 }, 00:15:58.096 "secure_channel": false, 00:15:58.096 "sock_impl": "ssl" 00:15:58.096 } 00:15:58.096 } 00:15:58.096 ] 00:15:58.096 } 00:15:58.096 ] 00:15:58.096 }' 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 
-- # set +x 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=85191 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 85191 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85191 ']' 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.096 02:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.096 [2024-11-08 02:19:59.795782] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:58.096 [2024-11-08 02:19:59.796169] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.096 [2024-11-08 02:19:59.937576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.096 [2024-11-08 02:19:59.973348] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.096 [2024-11-08 02:19:59.973414] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.096 [2024-11-08 02:19:59.973424] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.096 [2024-11-08 02:19:59.973446] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.096 [2024-11-08 02:19:59.973453] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
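The single-quoted JSON above is the target-side startup configuration; tls.sh hands it to nvmf_tgt through process substitution (the -c /dev/fd/62 argument visible in the nvmf_tgt command line above), so the target boots with the keyring, sock, bdev and nvmf subsystems already in place. Purely as an illustration, roughly the same state could be reached against an already-running target with rpc.py calls named after the "method" fields in that JSON; the flag spellings below are from memory and should be checked against rpc.py --help, and /var/tmp/spdk.sock is the default RPC socket the harness waits on.

  # Hypothetical rpc.py equivalent of the key pieces of the echoed config.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

  $RPC keyring_file_add_key key0 /tmp/tmp.hiYTAxHGEA            # TLS PSK used by host1
  $RPC sock_set_default_impl -i uring
  $RPC bdev_malloc_create -b malloc0 32 4096                     # 8192 x 4096-byte blocks = 32 MiB
  $RPC nvmf_create_transport -t tcp
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 00000000000000000000 -m 32
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420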
00:15:58.096 [2024-11-08 02:19:59.973516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.355 [2024-11-08 02:20:00.116620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.355 [2024-11-08 02:20:00.171328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.355 [2024-11-08 02:20:00.209970] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:58.355 [2024-11-08 02:20:00.210204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=85223 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 85223 /var/tmp/bdevperf.sock 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85223 ']' 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
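On the initiator side the test starts a bdevperf application the same way: the JSON printed below is fed to it on /dev/fd/63 via process substitution, and the harness then waits for the bdevperf RPC socket before driving any I/O. Stripped of the xtrace prefixes, the launch shown in the tls.sh@274 trace below amounts to roughly the following; $bdevperf_config is a placeholder for that JSON, and the socket-polling loop is a simplified stand-in for the real waitforlisten helper (which also checks that the RPC server actually answers).

  # Simplified launch of the TLS initiator-side bdevperf instance.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 \
      -c <(echo "$bdevperf_config") &
  bdevperf_pid=$!

  # Wait until the application has created its RPC socket before using it.
  while [[ ! -S /var/tmp/bdevperf.sock ]]; do sleep 0.1; done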
00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:59.292 02:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:59.292 "subsystems": [ 00:15:59.292 { 00:15:59.292 "subsystem": "keyring", 00:15:59.292 "config": [ 00:15:59.292 { 00:15:59.292 "method": "keyring_file_add_key", 00:15:59.292 "params": { 00:15:59.292 "name": "key0", 00:15:59.292 "path": "/tmp/tmp.hiYTAxHGEA" 00:15:59.292 } 00:15:59.292 } 00:15:59.292 ] 00:15:59.292 }, 00:15:59.292 { 00:15:59.292 "subsystem": "iobuf", 00:15:59.292 "config": [ 00:15:59.292 { 00:15:59.292 "method": "iobuf_set_options", 00:15:59.292 "params": { 00:15:59.292 "small_pool_count": 8192, 00:15:59.292 "large_pool_count": 1024, 00:15:59.292 "small_bufsize": 8192, 00:15:59.292 "large_bufsize": 135168 00:15:59.292 } 00:15:59.292 } 00:15:59.292 ] 00:15:59.292 }, 00:15:59.292 { 00:15:59.292 "subsystem": "sock", 00:15:59.292 "config": [ 00:15:59.292 { 00:15:59.292 "method": "sock_set_default_impl", 00:15:59.292 "params": { 00:15:59.292 "impl_name": "uring" 00:15:59.292 } 00:15:59.292 }, 00:15:59.292 { 00:15:59.292 "method": "sock_impl_set_options", 00:15:59.292 "params": { 00:15:59.292 "impl_name": "ssl", 00:15:59.292 "recv_buf_size": 4096, 00:15:59.292 "send_buf_size": 4096, 00:15:59.292 "enable_recv_pipe": true, 00:15:59.292 "enable_quickack": false, 00:15:59.292 "enable_placement_id": 0, 00:15:59.292 "enable_zerocopy_send_server": true, 00:15:59.292 "enable_zerocopy_send_client": false, 00:15:59.292 "zerocopy_threshold": 0, 00:15:59.292 "tls_version": 0, 00:15:59.292 "enable_ktls": false 00:15:59.292 } 00:15:59.292 }, 00:15:59.292 { 00:15:59.292 "method": "sock_impl_set_options", 00:15:59.292 "params": { 00:15:59.292 "impl_name": "posix", 00:15:59.292 "recv_buf_size": 2097152, 00:15:59.292 "send_buf_size": 2097152, 00:15:59.292 "enable_recv_pipe": true, 00:15:59.292 "enable_quickack": false, 00:15:59.292 "enable_placement_id": 0, 00:15:59.292 "enable_zerocopy_send_server": true, 00:15:59.292 "enable_zerocopy_send_client": false, 00:15:59.292 "zerocopy_threshold": 0, 00:15:59.292 "tls_version": 0, 00:15:59.292 "enable_ktls": false 00:15:59.292 } 00:15:59.292 }, 00:15:59.292 { 00:15:59.292 "method": "sock_impl_set_options", 00:15:59.292 "params": { 00:15:59.292 "impl_name": "uring", 00:15:59.292 "recv_buf_size": 2097152, 00:15:59.292 "send_buf_size": 2097152, 00:15:59.292 "enable_recv_pipe": true, 00:15:59.292 "enable_quickack": false, 00:15:59.292 "enable_placement_id": 0, 00:15:59.292 "enable_zerocopy_send_server": false, 00:15:59.292 "enable_zerocopy_send_client": false, 00:15:59.292 "zerocopy_threshold": 0, 00:15:59.292 "tls_version": 0, 00:15:59.292 "enable_ktls": false 00:15:59.292 } 00:15:59.292 } 00:15:59.292 ] 00:15:59.292 }, 00:15:59.292 { 00:15:59.292 "subsystem": "vmd", 00:15:59.292 "config": [] 00:15:59.292 }, 00:15:59.292 { 00:15:59.292 "subsystem": "accel", 00:15:59.292 "config": [ 00:15:59.292 { 00:15:59.292 "method": "accel_set_options", 00:15:59.292 "params": { 00:15:59.292 "small_cache_size": 128, 00:15:59.292 "large_cache_size": 16, 00:15:59.292 "task_count": 2048, 00:15:59.292 "sequence_count": 2048, 00:15:59.292 "buf_count": 2048 
00:15:59.292 } 00:15:59.292 } 00:15:59.293 ] 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "subsystem": "bdev", 00:15:59.293 "config": [ 00:15:59.293 { 00:15:59.293 "method": "bdev_set_options", 00:15:59.293 "params": { 00:15:59.293 "bdev_io_pool_size": 65535, 00:15:59.293 "bdev_io_cache_size": 256, 00:15:59.293 "bdev_auto_examine": true, 00:15:59.293 "iobuf_small_cache_size": 128, 00:15:59.293 "iobuf_large_cache_size": 16 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_raid_set_options", 00:15:59.293 "params": { 00:15:59.293 "process_window_size_kb": 1024, 00:15:59.293 "process_max_bandwidth_mb_sec": 0 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_iscsi_set_options", 00:15:59.293 "params": { 00:15:59.293 "timeout_sec": 30 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_nvme_set_options", 00:15:59.293 "params": { 00:15:59.293 "action_on_timeout": "none", 00:15:59.293 "timeout_us": 0, 00:15:59.293 "timeout_admin_us": 0, 00:15:59.293 "keep_alive_timeout_ms": 10000, 00:15:59.293 "arbitration_burst": 0, 00:15:59.293 "low_priority_weight": 0, 00:15:59.293 "medium_priority_weight": 0, 00:15:59.293 "high_priority_weight": 0, 00:15:59.293 "nvme_adminq_poll_period_us": 10000, 00:15:59.293 "nvme_ioq_poll_period_us": 0, 00:15:59.293 "io_queue_requests": 512, 00:15:59.293 "delay_cmd_submit": true, 00:15:59.293 "transport_retry_count": 4, 00:15:59.293 "bdev_retry_count": 3, 00:15:59.293 "transport_ack_timeout": 0, 00:15:59.293 "ctrlr_loss_timeout_sec": 0, 00:15:59.293 "reconnect_delay_sec": 0, 00:15:59.293 "fast_io_fail_timeout_sec": 0, 00:15:59.293 "disable_auto_failback": false, 00:15:59.293 "generate_uuids": false, 00:15:59.293 "transport_tos": 0, 00:15:59.293 "nvme_error_stat": false, 00:15:59.293 "rdma_srq_size": 0, 00:15:59.293 "io_path_stat": false, 00:15:59.293 "allow_accel_sequence": false, 00:15:59.293 "rdma_max_cq_size": 0, 00:15:59.293 "rdma_cm_event_timeout_ms": 0, 00:15:59.293 "dhchap_digests": [ 00:15:59.293 "sha256", 00:15:59.293 "sha384", 00:15:59.293 "sha512" 00:15:59.293 ], 00:15:59.293 "dhchap_dhgroups": [ 00:15:59.293 "null", 00:15:59.293 "ffdhe2048", 00:15:59.293 "ffdhe3072", 00:15:59.293 "ffdhe4096", 00:15:59.293 "ffdhe6144", 00:15:59.293 "ffdhe8192" 00:15:59.293 ] 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_nvme_attach_controller", 00:15:59.293 "params": { 00:15:59.293 "name": "nvme0", 00:15:59.293 "trtype": "TCP", 00:15:59.293 "adrfam": "IPv4", 00:15:59.293 "traddr": "10.0.0.3", 00:15:59.293 "trsvcid": "4420", 00:15:59.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.293 "prchk_reftag": false, 00:15:59.293 "prchk_guard": false, 00:15:59.293 "ctrlr_loss_timeout_sec": 0, 00:15:59.293 "reconnect_delay_sec": 0, 00:15:59.293 "fast_io_fail_timeout_sec": 0, 00:15:59.293 "psk": "key0", 00:15:59.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:59.293 "hdgst": false, 00:15:59.293 "ddgst": false 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_nvme_set_hotplug", 00:15:59.293 "params": { 00:15:59.293 "period_us": 100000, 00:15:59.293 "enable": false 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_enable_histogram", 00:15:59.293 "params": { 00:15:59.293 "name": "nvme0n1", 00:15:59.293 "enable": true 00:15:59.293 } 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "method": "bdev_wait_for_examine" 00:15:59.293 } 00:15:59.293 ] 00:15:59.293 }, 00:15:59.293 { 00:15:59.293 "subsystem": "nbd", 00:15:59.293 "config": [] 00:15:59.293 } 
00:15:59.293 ] 00:15:59.293 }' 00:15:59.293 [2024-11-08 02:20:00.950275] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:59.293 [2024-11-08 02:20:00.950373] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85223 ] 00:15:59.293 [2024-11-08 02:20:01.092739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.293 [2024-11-08 02:20:01.136160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.552 [2024-11-08 02:20:01.252214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.552 [2024-11-08 02:20:01.283967] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:00.490 02:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.490 02:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:00.490 02:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:00.490 02:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:16:00.490 02:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.490 02:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:00.749 Running I/O for 1 seconds... 
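With the controller attached over TLS, the two RPC calls traced just above first confirm that the expected controller name is registered and then release the queued workload (bdevperf was started with -z, so it idles until told to run). Without the xtrace noise they reduce to the following; paths and names are the ones that appear in the trace.

  # Check that the attached controller shows up under the expected name.
  name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
         bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # Kick off the verify workload defined on the bdevperf command line.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests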
00:16:01.687 3965.00 IOPS, 15.49 MiB/s 00:16:01.687 Latency(us) 00:16:01.687 [2024-11-08T02:20:03.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.687 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:01.687 Verification LBA range: start 0x0 length 0x2000 00:16:01.687 nvme0n1 : 1.03 3965.76 15.49 0.00 0.00 31926.34 7745.16 20971.52 00:16:01.687 [2024-11-08T02:20:03.571Z] =================================================================================================================== 00:16:01.687 [2024-11-08T02:20:03.571Z] Total : 3965.76 15.49 0.00 0.00 31926.34 7745.16 20971.52 00:16:01.687 { 00:16:01.687 "results": [ 00:16:01.687 { 00:16:01.687 "job": "nvme0n1", 00:16:01.687 "core_mask": "0x2", 00:16:01.687 "workload": "verify", 00:16:01.687 "status": "finished", 00:16:01.687 "verify_range": { 00:16:01.687 "start": 0, 00:16:01.687 "length": 8192 00:16:01.687 }, 00:16:01.687 "queue_depth": 128, 00:16:01.687 "io_size": 4096, 00:16:01.687 "runtime": 1.032337, 00:16:01.687 "iops": 3965.7592433478603, 00:16:01.687 "mibps": 15.49124704432758, 00:16:01.687 "io_failed": 0, 00:16:01.687 "io_timeout": 0, 00:16:01.687 "avg_latency_us": 31926.33812319581, 00:16:01.687 "min_latency_us": 7745.163636363636, 00:16:01.687 "max_latency_us": 20971.52 00:16:01.687 } 00:16:01.687 ], 00:16:01.687 "core_count": 1 00:16:01.687 } 00:16:01.687 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:16:01.687 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:16:01.687 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:01.687 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:16:01.687 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:16:01.687 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:16:01.687 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:01.687 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:16:01.687 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:16:01.687 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:16:01.687 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:01.687 nvmf_trace.0 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 85223 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85223 ']' 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85223 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85223 00:16:01.946 killing process with pid 
85223 00:16:01.946 Received shutdown signal, test time was about 1.000000 seconds 00:16:01.946 00:16:01.946 Latency(us) 00:16:01.946 [2024-11-08T02:20:03.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.946 [2024-11-08T02:20:03.830Z] =================================================================================================================== 00:16:01.946 [2024-11-08T02:20:03.830Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85223' 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85223 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85223 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:01.946 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:02.206 rmmod nvme_tcp 00:16:02.206 rmmod nvme_fabrics 00:16:02.206 rmmod nvme_keyring 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 85191 ']' 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 85191 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85191 ']' 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85191 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85191 00:16:02.206 killing process with pid 85191 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85191' 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85191 00:16:02.206 02:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85191 
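As a quick sanity check on the bdevperf result table printed a little earlier, the reported MiB/s is just IOPS times the 4096-byte I/O size, and the IOPS figure is completed I/Os divided by the measured runtime; recomputing from the values in the JSON results block:

  # Recompute the headline numbers from the bdevperf results above.
  awk 'BEGIN {
      iops    = 3965.7592433478603;   # "iops" from the results block
      iosize  = 4096;                 # 4 KiB I/O size (-o 4k)
      runtime = 1.032337;             # "runtime" in seconds
      printf "MiB/s          = %.2f\n", iops * iosize / (1024 * 1024);  # ~15.49
      printf "I/Os completed = %.0f\n", iops * runtime;                 # ~4094
  }'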
00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.465 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.72WxUXgdwe /tmp/tmp.H8xE0lKhZP /tmp/tmp.hiYTAxHGEA 00:16:02.724 00:16:02.724 real 1m21.796s 00:16:02.724 user 2m12.646s 00:16:02.724 sys 0m26.570s 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.724 ************************************ 00:16:02.724 END TEST nvmf_tls 00:16:02.724 
************************************ 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:02.724 ************************************ 00:16:02.724 START TEST nvmf_fips 00:16:02.724 ************************************ 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:02.724 * Looking for test storage... 00:16:02.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.724 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.725 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:02.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.985 --rc genhtml_branch_coverage=1 00:16:02.985 --rc genhtml_function_coverage=1 00:16:02.985 --rc genhtml_legend=1 00:16:02.985 --rc geninfo_all_blocks=1 00:16:02.985 --rc geninfo_unexecuted_blocks=1 00:16:02.985 00:16:02.985 ' 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:02.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.985 --rc genhtml_branch_coverage=1 00:16:02.985 --rc genhtml_function_coverage=1 00:16:02.985 --rc genhtml_legend=1 00:16:02.985 --rc geninfo_all_blocks=1 00:16:02.985 --rc geninfo_unexecuted_blocks=1 00:16:02.985 00:16:02.985 ' 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:02.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.985 --rc genhtml_branch_coverage=1 00:16:02.985 --rc genhtml_function_coverage=1 00:16:02.985 --rc genhtml_legend=1 00:16:02.985 --rc geninfo_all_blocks=1 00:16:02.985 --rc geninfo_unexecuted_blocks=1 00:16:02.985 00:16:02.985 ' 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:02.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.985 --rc genhtml_branch_coverage=1 00:16:02.985 --rc genhtml_function_coverage=1 00:16:02.985 --rc genhtml_legend=1 00:16:02.985 --rc geninfo_all_blocks=1 00:16:02.985 --rc geninfo_unexecuted_blocks=1 00:16:02.985 00:16:02.985 ' 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
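The lt/ge helpers traced here (used above for the lcov version and again further down for the OpenSSL 3.0.0 check in fips.sh) compare dotted version strings field by field via cmp_versions in scripts/common.sh. A condensed sketch of the same idea follows; it is an approximation for illustration, not the verbatim helper.

  # Field-by-field dotted-version comparison (approximation of cmp_versions).
  version_ge() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}; y=${b[i]:-0}
          (( x > y )) && return 0
          (( x < y )) && return 1
      done
      return 0   # versions are equal
  }

  version_ge 3.1.1 3.0.0 && echo "OpenSSL is new enough for the FIPS test"
  version_ge 1.15 2      || echo "lcov is older than 2.x"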
00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:02.985 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.985 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:16:02.986 Error setting digest 00:16:02.986 40F2C4CDB27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:16:02.986 40F2C4CDB27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:02.986 
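The md5 failure above is the expected outcome: with OPENSSL_CONF pointing at the generated spdk_fips.conf, only the base and FIPS providers listed by "openssl list -providers" are active, so non-approved digests are rejected, and the test's NOT wrapper treats a successful md5 as an error. A minimal standalone version of that negative check, assuming the same generated config file sits in the current directory:

  # Negative check: under the FIPS provider configuration, md5 must fail.
  export OPENSSL_CONF=spdk_fips.conf   # produced by build_openssl_config above

  if echo test | openssl md5 2>/dev/null; then
      echo "ERROR: md5 unexpectedly succeeded - FIPS providers not in effect" >&2
      exit 1
  else
      echo "md5 correctly rejected by the FIPS provider"
  fi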
02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:02.986 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.987 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.987 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.987 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:02.987 Cannot find device "nvmf_init_br" 00:16:02.987 02:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:02.987 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:02.987 Cannot find device "nvmf_init_br2" 00:16:02.987 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:02.987 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:02.987 Cannot find device "nvmf_tgt_br" 00:16:02.987 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:16:02.987 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:03.246 Cannot find device "nvmf_tgt_br2" 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:03.246 Cannot find device "nvmf_init_br" 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:03.246 Cannot find device "nvmf_init_br2" 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:03.246 Cannot find device "nvmf_tgt_br" 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:03.246 Cannot find device "nvmf_tgt_br2" 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:03.246 Cannot find device "nvmf_br" 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:03.246 Cannot find device "nvmf_init_if" 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:03.246 Cannot find device "nvmf_init_if2" 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:03.246 02:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:03.246 02:20:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:03.246 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:03.505 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:03.505 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:03.505 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:03.505 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
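For orientation, the nvmf_veth_init sequence traced above builds the test network entirely from veth pairs: the initiator ends stay in the default namespace, the target ends are moved into nvmf_tgt_ns_spdk, and the peer interfaces are joined by the nvmf_br bridge. Condensed to a single initiator/target pair (the run uses two of each), the topology amounts to the following sketch:

# Condensed sketch of the veth/namespace topology built by nvmf_veth_init (one pair shown; illustrative only).
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

# Bridge the peer ends so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Open the NVMe/TCP port for the initiator-facing interface.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT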
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:03.505 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:03.505 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:03.505 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:03.505 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:03.505 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:03.505 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:16:03.505 00:16:03.505 --- 10.0.0.3 ping statistics --- 00:16:03.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.505 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:03.505 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:03.505 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:03.505 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:16:03.505 00:16:03.505 --- 10.0.0.4 ping statistics --- 00:16:03.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.505 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:03.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:03.506 00:16:03.506 --- 10.0.0.1 ping statistics --- 00:16:03.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.506 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:03.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:03.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:16:03.506 00:16:03.506 --- 10.0.0.2 ping statistics --- 00:16:03.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.506 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=85546 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 85546 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 85546 ']' 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.506 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:03.506 [2024-11-08 02:20:05.302236] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:03.506 [2024-11-08 02:20:05.302553] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.765 [2024-11-08 02:20:05.445503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.765 [2024-11-08 02:20:05.487720] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.765 [2024-11-08 02:20:05.487793] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.765 [2024-11-08 02:20:05.487815] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.765 [2024-11-08 02:20:05.487825] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.765 [2024-11-08 02:20:05.487833] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.765 [2024-11-08 02:20:05.487865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.765 [2024-11-08 02:20:05.521905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.SJR 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.SJR 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.SJR 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.SJR 00:16:03.765 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:04.025 [2024-11-08 02:20:05.876941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.025 [2024-11-08 02:20:05.892941] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:04.025 [2024-11-08 02:20:05.893166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:04.284 malloc0 00:16:04.284 02:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:04.284 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85576 00:16:04.284 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:04.284 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85576 /var/tmp/bdevperf.sock 00:16:04.284 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 85576 ']' 00:16:04.284 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:04.284 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:04.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:04.284 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:04.284 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:04.284 02:20:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:04.284 [2024-11-08 02:20:06.054681] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:04.284 [2024-11-08 02:20:06.054824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85576 ] 00:16:04.542 [2024-11-08 02:20:06.198464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.542 [2024-11-08 02:20:06.243315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.542 [2024-11-08 02:20:06.278687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:05.478 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:05.478 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:16:05.478 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.SJR 00:16:05.478 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:05.737 [2024-11-08 02:20:07.545833] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:05.995 TLSTESTn1 00:16:05.995 02:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:05.995 Running I/O for 10 seconds... 
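The bdevperf half of the run, traced above, follows a four-step sequence: start bdevperf idle on its own RPC socket, register the PSK file as a named key, attach an NVMe/TCP controller that negotiates TLS with that key, and trigger the timed workload. A sketch assembled from the traced commands (paths, NQNs, and key names follow the trace; error handling and the wait-for-socket step are omitted):

# Sketch of the bdevperf TLS sequence shown above (assembled from the trace, not the fips.sh source).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) so it can be configured over its RPC socket before I/O begins.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
# (the harness waits for $SOCK to appear before issuing RPCs; omitted here)

# Register the 0600-permission PSK file as key0 in the application keyring ...
"$RPC" -s "$SOCK" keyring_file_add_key key0 /tmp/spdk-psk.SJR

# ... then attach a controller over NVMe/TCP, negotiating TLS with that PSK.
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Kick off the timed verify workload against the attached namespace (TLSTESTn1).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests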
00:16:07.868 3850.00 IOPS, 15.04 MiB/s [2024-11-08T02:20:11.128Z] 3934.50 IOPS, 15.37 MiB/s [2024-11-08T02:20:12.063Z] 3961.67 IOPS, 15.48 MiB/s [2024-11-08T02:20:13.040Z] 3976.00 IOPS, 15.53 MiB/s [2024-11-08T02:20:13.976Z] 3980.00 IOPS, 15.55 MiB/s [2024-11-08T02:20:14.911Z] 3986.83 IOPS, 15.57 MiB/s [2024-11-08T02:20:15.847Z] 3989.57 IOPS, 15.58 MiB/s [2024-11-08T02:20:16.782Z] 3988.88 IOPS, 15.58 MiB/s [2024-11-08T02:20:18.158Z] 4040.22 IOPS, 15.78 MiB/s [2024-11-08T02:20:18.158Z] 4112.70 IOPS, 16.07 MiB/s 00:16:16.274 Latency(us) 00:16:16.274 [2024-11-08T02:20:18.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.274 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:16.274 Verification LBA range: start 0x0 length 0x2000 00:16:16.274 TLSTESTn1 : 10.01 4119.11 16.09 0.00 0.00 31019.47 5332.25 24784.52 00:16:16.274 [2024-11-08T02:20:18.158Z] =================================================================================================================== 00:16:16.274 [2024-11-08T02:20:18.158Z] Total : 4119.11 16.09 0.00 0.00 31019.47 5332.25 24784.52 00:16:16.274 { 00:16:16.274 "results": [ 00:16:16.274 { 00:16:16.274 "job": "TLSTESTn1", 00:16:16.274 "core_mask": "0x4", 00:16:16.274 "workload": "verify", 00:16:16.274 "status": "finished", 00:16:16.275 "verify_range": { 00:16:16.275 "start": 0, 00:16:16.275 "length": 8192 00:16:16.275 }, 00:16:16.275 "queue_depth": 128, 00:16:16.275 "io_size": 4096, 00:16:16.275 "runtime": 10.014298, 00:16:16.275 "iops": 4119.110495813086, 00:16:16.275 "mibps": 16.09027537426987, 00:16:16.275 "io_failed": 0, 00:16:16.275 "io_timeout": 0, 00:16:16.275 "avg_latency_us": 31019.472439008267, 00:16:16.275 "min_latency_us": 5332.2472727272725, 00:16:16.275 "max_latency_us": 24784.523636363636 00:16:16.275 } 00:16:16.275 ], 00:16:16.275 "core_count": 1 00:16:16.275 } 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:16.275 nvmf_trace.0 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85576 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 85576 ']' 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 
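As a quick consistency check on the summary above: at the 4096-byte I/O size, MiB/s = IOPS × 4096 / 2^20, and 4119.11 × 4096 / 1,048,576 ≈ 16.09 MiB/s, which matches the reported throughput; over the 10.014 s runtime that corresponds to roughly 41,250 completed I/Os with zero failures or timeouts.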
85576 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85576 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85576' 00:16:16.275 killing process with pid 85576 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 85576 00:16:16.275 Received shutdown signal, test time was about 10.000000 seconds 00:16:16.275 00:16:16.275 Latency(us) 00:16:16.275 [2024-11-08T02:20:18.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.275 [2024-11-08T02:20:18.159Z] =================================================================================================================== 00:16:16.275 [2024-11-08T02:20:18.159Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:16.275 02:20:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 85576 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:16.275 rmmod nvme_tcp 00:16:16.275 rmmod nvme_fabrics 00:16:16.275 rmmod nvme_keyring 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 85546 ']' 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 85546 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 85546 ']' 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 85546 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:16:16.275 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85546 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:16.534 killing process with pid 85546 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85546' 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 85546 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 85546 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:16.534 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:16:16.793 02:20:18 
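The cleanup traced above is the mirror image of the setup: iptables rules are removed by filtering the saved rule set on their SPDK_NVMF comment, the veth/bridge topology is torn down, and the target namespace is dropped (remove_spdk_ns itself is redirected to /dev/null in the trace, so its namespace deletion is an assumption in the sketch below).

# Condensed teardown sketch mirroring nvmftestfini/nvmf_veth_fini above (illustrative only).
# Drop only the firewall rules this test added, identified by their SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Detach the bridge ports, bring them down, then delete the bridge and veth pairs.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

# Assumption: remove_spdk_ns ultimately deletes the test namespace.
ip netns delete nvmf_tgt_ns_spdk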
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.SJR 00:16:16.793 00:16:16.793 real 0m14.140s 00:16:16.793 user 0m19.893s 00:16:16.793 sys 0m5.601s 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:16.793 ************************************ 00:16:16.793 END TEST nvmf_fips 00:16:16.793 ************************************ 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:16.793 ************************************ 00:16:16.793 START TEST nvmf_control_msg_list 00:16:16.793 ************************************ 00:16:16.793 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:17.053 * Looking for test storage... 00:16:17.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:17.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.053 --rc genhtml_branch_coverage=1 00:16:17.053 --rc genhtml_function_coverage=1 00:16:17.053 --rc genhtml_legend=1 00:16:17.053 --rc geninfo_all_blocks=1 00:16:17.053 --rc geninfo_unexecuted_blocks=1 00:16:17.053 00:16:17.053 ' 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:17.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.053 --rc genhtml_branch_coverage=1 00:16:17.053 --rc genhtml_function_coverage=1 00:16:17.053 --rc genhtml_legend=1 00:16:17.053 --rc geninfo_all_blocks=1 00:16:17.053 --rc geninfo_unexecuted_blocks=1 00:16:17.053 00:16:17.053 ' 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:17.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.053 --rc genhtml_branch_coverage=1 00:16:17.053 --rc genhtml_function_coverage=1 00:16:17.053 --rc genhtml_legend=1 00:16:17.053 --rc geninfo_all_blocks=1 00:16:17.053 --rc geninfo_unexecuted_blocks=1 00:16:17.053 00:16:17.053 ' 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:17.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.053 --rc genhtml_branch_coverage=1 00:16:17.053 --rc genhtml_function_coverage=1 00:16:17.053 --rc genhtml_legend=1 00:16:17.053 --rc geninfo_all_blocks=1 00:16:17.053 --rc 
geninfo_unexecuted_blocks=1 00:16:17.053 00:16:17.053 ' 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.053 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:17.054 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:17.054 Cannot find device "nvmf_init_br" 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:17.054 Cannot find device "nvmf_init_br2" 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:17.054 Cannot find device "nvmf_tgt_br" 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:17.054 Cannot find device "nvmf_tgt_br2" 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:17.054 Cannot find device "nvmf_init_br" 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:17.054 Cannot find device "nvmf_init_br2" 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:17.054 Cannot find device "nvmf_tgt_br" 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:17.054 Cannot find device "nvmf_tgt_br2" 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:16:17.054 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:17.054 Cannot find device "nvmf_br" 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:17.313 Cannot find 
device "nvmf_init_if" 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:17.313 Cannot find device "nvmf_init_if2" 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:17.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:17.313 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:17.313 02:20:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:17.313 02:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:17.313 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:17.314 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.314 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:16:17.314 00:16:17.314 --- 10.0.0.3 ping statistics --- 00:16:17.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.314 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:17.314 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:17.314 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:16:17.314 00:16:17.314 --- 10.0.0.4 ping statistics --- 00:16:17.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.314 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:17.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:17.314 00:16:17.314 --- 10.0.0.1 ping statistics --- 00:16:17.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.314 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:17.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:16:17.314 00:16:17.314 --- 10.0.0.2 ping statistics --- 00:16:17.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.314 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:17.314 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=85966 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 85966 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 85966 ']' 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:17.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
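The trace above (nvmf_veth_init in nvmf/common.sh) builds the usual two-initiator/two-target veth topology and verifies it with pings before nvmf_tgt is launched inside the namespace. A condensed sketch of that setup follows; it keeps the same interface names, addresses, port and iptables comment tag that appear in the trace, but drops the script's device-existence checks, cleanup traps and error handling, so treat it as illustrative rather than a drop-in replacement for common.sh (the iptables comments are abbreviated here).

```bash
#!/usr/bin/env bash
# Sketch of the veth/bridge test topology traced above: target-side
# interfaces live in a network namespace, host-side peers hang off one
# bridge, and TCP/4420 is opened for NVMe-oF. Simplified; run as root.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: *_if is the usable end, *_br is the peer that joins the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# target-side interfaces move into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# initiator addresses on the host, target addresses inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up, including loopback inside the namespace
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# one bridge ties the four host-side peers together
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# allow NVMe/TCP (port 4420) in and bridge-local forwarding; the SPDK_NVMF
# comment tag is what the cleanup path greps out of iptables-save later
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'

# sanity check: host reaches the target IPs, namespace reaches the host IPs
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
```

Once connectivity is confirmed, the target is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF, pid 85966 above) and then provisioned over the /var/tmp/spdk.sock RPC socket, which is what the rpc_cmd calls that follow in the trace are doing.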
00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:17.573 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:17.573 [2024-11-08 02:20:19.271440] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:17.573 [2024-11-08 02:20:19.272063] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.573 [2024-11-08 02:20:19.414835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.835 [2024-11-08 02:20:19.457063] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.835 [2024-11-08 02:20:19.457140] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.835 [2024-11-08 02:20:19.457157] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.835 [2024-11-08 02:20:19.457167] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.835 [2024-11-08 02:20:19.457176] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.835 [2024-11-08 02:20:19.457209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.835 [2024-11-08 02:20:19.491919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:17.835 [2024-11-08 02:20:19.587066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:17.835 Malloc0 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:17.835 [2024-11-08 02:20:19.643453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85985 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85986 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85987 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:17.835 02:20:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85985 00:16:18.096 [2024-11-08 02:20:19.821760] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:18.096 [2024-11-08 02:20:19.832397] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:18.096 [2024-11-08 02:20:19.832637] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:19.036 Initializing NVMe Controllers 00:16:19.036 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:19.036 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:16:19.036 Initialization complete. Launching workers. 00:16:19.036 ======================================================== 00:16:19.036 Latency(us) 00:16:19.036 Device Information : IOPS MiB/s Average min max 00:16:19.036 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3725.00 14.55 268.13 173.01 662.47 00:16:19.036 ======================================================== 00:16:19.036 Total : 3725.00 14.55 268.13 173.01 662.47 00:16:19.036 00:16:19.036 Initializing NVMe Controllers 00:16:19.036 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:19.036 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:16:19.036 Initialization complete. Launching workers. 00:16:19.036 ======================================================== 00:16:19.036 Latency(us) 00:16:19.036 Device Information : IOPS MiB/s Average min max 00:16:19.036 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3734.98 14.59 267.38 160.53 867.72 00:16:19.036 ======================================================== 00:16:19.037 Total : 3734.98 14.59 267.38 160.53 867.72 00:16:19.037 00:16:19.037 Initializing NVMe Controllers 00:16:19.037 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:19.037 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:16:19.037 Initialization complete. Launching workers. 
00:16:19.037 ======================================================== 00:16:19.037 Latency(us) 00:16:19.037 Device Information : IOPS MiB/s Average min max 00:16:19.037 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3729.96 14.57 267.74 161.37 567.05 00:16:19.037 ======================================================== 00:16:19.037 Total : 3729.96 14.57 267.74 161.37 567.05 00:16:19.037 00:16:19.037 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85986 00:16:19.037 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85987 00:16:19.037 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:19.037 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:16:19.037 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:19.037 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:16:19.037 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:19.037 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:16:19.037 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:19.037 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:19.296 rmmod nvme_tcp 00:16:19.296 rmmod nvme_fabrics 00:16:19.296 rmmod nvme_keyring 00:16:19.296 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:19.296 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:16:19.296 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:16:19.296 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 85966 ']' 00:16:19.296 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 85966 00:16:19.296 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 85966 ']' 00:16:19.296 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 85966 00:16:19.296 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:16:19.296 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.296 02:20:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85966 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.296 killing process with pid 85966 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85966' 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 85966 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 85966 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:16:19.296 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:19.297 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:19.297 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:19.297 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:16:19.556 00:16:19.556 real 0m2.782s 00:16:19.556 user 0m4.673s 00:16:19.556 
sys 0m1.309s 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.556 ************************************ 00:16:19.556 END TEST nvmf_control_msg_list 00:16:19.556 ************************************ 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:19.556 ************************************ 00:16:19.556 START TEST nvmf_wait_for_buf 00:16:19.556 ************************************ 00:16:19.556 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:19.816 * Looking for test storage... 00:16:19.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:19.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.816 --rc genhtml_branch_coverage=1 00:16:19.816 --rc genhtml_function_coverage=1 00:16:19.816 --rc genhtml_legend=1 00:16:19.816 --rc geninfo_all_blocks=1 00:16:19.816 --rc geninfo_unexecuted_blocks=1 00:16:19.816 00:16:19.816 ' 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:19.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.816 --rc genhtml_branch_coverage=1 00:16:19.816 --rc genhtml_function_coverage=1 00:16:19.816 --rc genhtml_legend=1 00:16:19.816 --rc geninfo_all_blocks=1 00:16:19.816 --rc geninfo_unexecuted_blocks=1 00:16:19.816 00:16:19.816 ' 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:19.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.816 --rc genhtml_branch_coverage=1 00:16:19.816 --rc genhtml_function_coverage=1 00:16:19.816 --rc genhtml_legend=1 00:16:19.816 --rc geninfo_all_blocks=1 00:16:19.816 --rc geninfo_unexecuted_blocks=1 00:16:19.816 00:16:19.816 ' 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:19.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.816 --rc genhtml_branch_coverage=1 00:16:19.816 --rc genhtml_function_coverage=1 00:16:19.816 --rc genhtml_legend=1 00:16:19.816 --rc geninfo_all_blocks=1 00:16:19.816 --rc geninfo_unexecuted_blocks=1 00:16:19.816 00:16:19.816 ' 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.816 02:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.816 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:19.817 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
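The "[: : integer expression expected" line from common.sh line 33 just above is benign: build_nvmf_app_args performs a numeric test against a variable that is empty in this run ('[' '' -eq 1 ']'), so the test simply evaluates false and the script continues. A minimal reproduction with a defensive variant is sketched below; SOME_FLAG is a hypothetical stand-in for illustration, not the variable common.sh actually checks.

```bash
#!/usr/bin/env bash
# Reproduces the benign "integer expression expected" message seen above:
# a numeric [ ... -eq ... ] test on an empty value prints the complaint,
# returns a non-zero status, and the surrounding if simply takes the
# false branch. SOME_FLAG is a hypothetical example variable.

SOME_FLAG=""

# Prints: [: : integer expression expected  (test status is non-zero)
if [ "$SOME_FLAG" -eq 1 ]; then
    echo "flag enabled"
fi

# Defensive variant: default the empty/unset value to 0 before comparing,
# which keeps the test quiet while still evaluating to false.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi
```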
00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:19.817 Cannot find device "nvmf_init_br" 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:19.817 Cannot find device "nvmf_init_br2" 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:19.817 Cannot find device "nvmf_tgt_br" 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.817 Cannot find device "nvmf_tgt_br2" 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:19.817 Cannot find device "nvmf_init_br" 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:16:19.817 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:20.077 Cannot find device "nvmf_init_br2" 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:20.077 Cannot find device "nvmf_tgt_br" 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:20.077 Cannot find device "nvmf_tgt_br2" 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:20.077 Cannot find device "nvmf_br" 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:20.077 Cannot find device "nvmf_init_if" 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:20.077 Cannot find device "nvmf_init_if2" 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.077 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:20.077 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:20.337 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:20.337 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.337 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:20.337 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:20.337 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:20.337 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.337 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:20.337 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:20.337 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:20.337 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:16:20.337 00:16:20.337 --- 10.0.0.3 ping statistics --- 00:16:20.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.337 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:20.337 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:20.337 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:20.337 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:16:20.337 00:16:20.337 --- 10.0.0.4 ping statistics --- 00:16:20.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.337 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:20.337 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:16:20.337 00:16:20.337 --- 10.0.0.1 ping statistics --- 00:16:20.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.337 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:16:20.337 02:20:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:20.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:20.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:16:20.337 00:16:20.337 --- 10.0.0.2 ping statistics --- 00:16:20.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.337 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=86218 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 86218 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 86218 ']' 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.337 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:20.337 [2024-11-08 02:20:22.092789] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:20.337 [2024-11-08 02:20:22.092879] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.597 [2024-11-08 02:20:22.231920] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.597 [2024-11-08 02:20:22.262891] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.597 [2024-11-08 02:20:22.262961] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.597 [2024-11-08 02:20:22.262988] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.597 [2024-11-08 02:20:22.262996] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.597 [2024-11-08 02:20:22.263003] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.597 [2024-11-08 02:20:22.263029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.597 02:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:20.597 [2024-11-08 02:20:22.427996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:20.597 Malloc0 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:20.597 [2024-11-08 02:20:22.467944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.597 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:20.857 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.857 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:20.857 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.857 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:20.857 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.857 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:20.857 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.857 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:20.857 [2024-11-08 02:20:22.500047] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:20.857 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.857 02:20:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:20.857 [2024-11-08 02:20:22.677192] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:22.234 Initializing NVMe Controllers 00:16:22.234 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:22.234 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:22.234 Initialization complete. Launching workers. 00:16:22.234 ======================================================== 00:16:22.234 Latency(us) 00:16:22.234 Device Information : IOPS MiB/s Average min max 00:16:22.234 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 506.48 63.31 7898.08 6932.55 8996.91 00:16:22.234 ======================================================== 00:16:22.234 Total : 506.48 63.31 7898.08 6932.55 8996.91 00:16:22.234 00:16:22.234 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:22.234 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.234 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:22.234 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:22.234 02:20:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4826 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4826 -eq 0 ]] 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.234 rmmod nvme_tcp 00:16:22.234 rmmod nvme_fabrics 00:16:22.234 rmmod nvme_keyring 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:22.234 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:22.493 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 86218 ']' 00:16:22.493 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 86218 00:16:22.493 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 86218 ']' 00:16:22.493 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # 
kill -0 86218 00:16:22.493 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:16:22.493 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.493 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86218 00:16:22.493 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:22.493 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:22.493 killing process with pid 86218 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86218' 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 86218 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 86218 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:22.494 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:22.753 00:16:22.753 real 0m3.095s 00:16:22.753 user 0m2.502s 00:16:22.753 sys 0m0.734s 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.753 ************************************ 00:16:22.753 END TEST nvmf_wait_for_buf 00:16:22.753 ************************************ 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:22.753 ************************************ 00:16:22.753 START TEST nvmf_fuzz 00:16:22.753 ************************************ 00:16:22.753 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:23.013 * Looking for test storage... 
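
The wait_for_buf trace above deliberately starves the shared iobuf small pool: it shrinks the pool to 154 buffers, creates a TCP transport with only a handful of shared buffers, drives 128 KiB random reads with spdk_nvme_perf, and then asserts that iobuf_get_stats reports a non-zero small_pool.retry count (4826 here). A minimal hand-run sketch of the same sequence follows, assuming a freshly started nvmf_tgt and that scripts/rpc.py talks to its default RPC socket (the rpc_cmd helper in the trace issues the same JSON-RPC calls); paths, flags and the 10.0.0.3:4420 listener follow the log.

# Sketch of the wait_for_buf RPC sequence, not the test harness itself.
rpc=./scripts/rpc.py
$rpc accel_set_options --small-cache-size 0 --large-cache-size 0
$rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately tiny small pool
$rpc framework_start_init
$rpc bdev_malloc_create -b Malloc0 32 512                            # 32 MiB malloc bdev, 512 B blocks
$rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24             # very few shared buffers
$rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# 128 KiB reads force the transport to wait for buffers; the retry counter
# in the iobuf stats should end up non-zero.
./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
retries=$($rpc iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ "$retries" -gt 0 ]] && echo "buffer waits observed: $retries"
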
00:16:23.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.013 --rc genhtml_branch_coverage=1 00:16:23.013 --rc genhtml_function_coverage=1 00:16:23.013 --rc genhtml_legend=1 00:16:23.013 --rc geninfo_all_blocks=1 00:16:23.013 --rc geninfo_unexecuted_blocks=1 00:16:23.013 00:16:23.013 ' 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.013 --rc genhtml_branch_coverage=1 00:16:23.013 --rc genhtml_function_coverage=1 00:16:23.013 --rc genhtml_legend=1 00:16:23.013 --rc geninfo_all_blocks=1 00:16:23.013 --rc geninfo_unexecuted_blocks=1 00:16:23.013 00:16:23.013 ' 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.013 --rc genhtml_branch_coverage=1 00:16:23.013 --rc genhtml_function_coverage=1 00:16:23.013 --rc genhtml_legend=1 00:16:23.013 --rc geninfo_all_blocks=1 00:16:23.013 --rc geninfo_unexecuted_blocks=1 00:16:23.013 00:16:23.013 ' 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:23.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.013 --rc genhtml_branch_coverage=1 00:16:23.013 --rc genhtml_function_coverage=1 00:16:23.013 --rc genhtml_legend=1 00:16:23.013 --rc geninfo_all_blocks=1 00:16:23.013 --rc geninfo_unexecuted_blocks=1 00:16:23.013 00:16:23.013 ' 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
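
The block above is scripts/common.sh deciding whether the installed lcov predates 2.x (lt 1.15 2): both version strings are split on dots, padded to the same length, and compared field by field. The sketch below illustrates that comparison pattern on its own; it is not the SPDK helper, just a stand-alone version of the same idea.

# Field-wise dotted-version compare, mirroring the cmp_versions trace above.
# Returns 0 when $1 is strictly older than $2.
version_lt() {
    local IFS=.-
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # this field already larger
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # this field smaller -> older
    done
    return 1   # equal -> not less-than
}

version_lt 1.15 2 && echo "lcov older than 2.x, keep the legacy LCOV_OPTS"
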
00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.013 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:23.014 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
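
The nvmf/common.sh defaults traced above (NVMF_PORT=4420, the 10.0.0.x address plan, NVME_HOSTNQN from nvme gen-hostnqn and the matching NVME_HOSTID) exist so initiator-side tests can build nvme-cli connect calls. Nothing in this particular trace issues such a call; the line below only illustrates what those variables compose to, reusing the generated host NQN/ID shown above and the cnode1 subsystem created later for the fuzz test.

# Illustrative initiator-side connect built from the defaults above (not run here).
nvme connect -t tcp -a 10.0.0.3 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 \
    --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156
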
00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:23.014 Cannot find device "nvmf_init_br" 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:16:23.014 02:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:23.014 Cannot find device "nvmf_init_br2" 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:23.014 Cannot find device "nvmf_tgt_br" 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.014 Cannot find device "nvmf_tgt_br2" 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:23.014 Cannot find device "nvmf_init_br" 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:23.014 Cannot find device "nvmf_init_br2" 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:23.014 Cannot find device "nvmf_tgt_br" 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:23.014 Cannot find device "nvmf_tgt_br2" 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:23.014 Cannot find device "nvmf_br" 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:16:23.014 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:23.274 Cannot find device "nvmf_init_if" 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:23.274 Cannot find device "nvmf_init_if2" 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.274 02:20:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.274 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:23.534 02:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:23.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:16:23.534 00:16:23.534 --- 10.0.0.3 ping statistics --- 00:16:23.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.534 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:23.534 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:23.534 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:16:23.534 00:16:23.534 --- 10.0.0.4 ping statistics --- 00:16:23.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.534 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:23.534 00:16:23.534 --- 10.0.0.1 ping statistics --- 00:16:23.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.534 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:23.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:23.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:16:23.534 00:16:23.534 --- 10.0.0.2 ping statistics --- 00:16:23.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.534 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=86483 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 86483 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 86483 ']' 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:23.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
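
The nvmf_veth_init block that just completed builds the virtual test network: an nvmf_tgt_ns_spdk namespace, veth pairs whose target ends (10.0.0.3, 10.0.0.4) live inside the namespace and whose initiator ends (10.0.0.1, 10.0.0.2) stay on the host, all joined through the nvmf_br bridge, with iptables ACCEPT rules for port 4420 and the four pings as a connectivity check. The condensed sketch below reproduces one initiator/target leg of that topology, assuming root privileges; interface and address names follow the trace, and the second pair on each side is created the same way.

# One leg of the veth topology built above (condensed).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together and let NVMe/TCP traffic through.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3    # host initiator reaching the target address inside the namespace
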
00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:23.534 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.794 Malloc0 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:16:23.794 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:16:24.053 Shutting down the fuzz application 00:16:24.053 02:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:16:24.313 Shutting down the fuzz application 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:24.313 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:24.313 rmmod nvme_tcp 00:16:24.572 rmmod nvme_fabrics 00:16:24.572 rmmod nvme_keyring 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 86483 ']' 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 86483 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 86483 ']' 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 86483 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86483 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:24.572 killing process with pid 86483 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86483' 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 86483 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 86483 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:24.572 02:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:24.572 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:16:24.831 00:16:24.831 real 0m2.116s 00:16:24.831 user 0m1.742s 00:16:24.831 sys 0m0.636s 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.831 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:24.832 ************************************ 00:16:24.832 END TEST nvmf_fuzz 00:16:24.832 ************************************ 00:16:25.091 02:20:26 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.091 ************************************ 00:16:25.091 START TEST nvmf_multiconnection 00:16:25.091 ************************************ 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:25.091 * Looking for test storage... 00:16:25.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.091 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:25.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.092 --rc genhtml_branch_coverage=1 00:16:25.092 --rc genhtml_function_coverage=1 00:16:25.092 --rc genhtml_legend=1 00:16:25.092 --rc geninfo_all_blocks=1 00:16:25.092 --rc geninfo_unexecuted_blocks=1 00:16:25.092 00:16:25.092 ' 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:25.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.092 --rc genhtml_branch_coverage=1 00:16:25.092 --rc genhtml_function_coverage=1 00:16:25.092 --rc genhtml_legend=1 00:16:25.092 --rc geninfo_all_blocks=1 00:16:25.092 --rc geninfo_unexecuted_blocks=1 00:16:25.092 00:16:25.092 ' 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:25.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.092 --rc genhtml_branch_coverage=1 00:16:25.092 --rc genhtml_function_coverage=1 00:16:25.092 --rc genhtml_legend=1 00:16:25.092 --rc geninfo_all_blocks=1 00:16:25.092 --rc geninfo_unexecuted_blocks=1 00:16:25.092 00:16:25.092 ' 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:25.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.092 --rc genhtml_branch_coverage=1 00:16:25.092 --rc genhtml_function_coverage=1 00:16:25.092 --rc genhtml_legend=1 00:16:25.092 --rc geninfo_all_blocks=1 00:16:25.092 --rc geninfo_unexecuted_blocks=1 00:16:25.092 00:16:25.092 ' 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.092 
02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.092 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:25.092 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.093 02:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.093 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:25.352 Cannot find device "nvmf_init_br" 00:16:25.352 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:16:25.352 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:25.352 Cannot find device "nvmf_init_br2" 00:16:25.352 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:16:25.352 02:20:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:25.352 Cannot find device "nvmf_tgt_br" 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.352 Cannot find device "nvmf_tgt_br2" 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:25.352 Cannot find device "nvmf_init_br" 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:25.352 Cannot find device "nvmf_init_br2" 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:25.352 Cannot find device "nvmf_tgt_br" 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:25.352 Cannot find device "nvmf_tgt_br2" 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:25.352 Cannot find device "nvmf_br" 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:25.352 Cannot find device "nvmf_init_if" 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:16:25.352 Cannot find device "nvmf_init_if2" 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:25.352 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:25.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:16:25.611 00:16:25.611 --- 10.0.0.3 ping statistics --- 00:16:25.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.611 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:25.611 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:25.611 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:25.611 00:16:25.611 --- 10.0.0.4 ping statistics --- 00:16:25.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.611 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:25.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:25.611 00:16:25.611 --- 10.0.0.1 ping statistics --- 00:16:25.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.611 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:25.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:16:25.611 00:16:25.611 --- 10.0.0.2 ping statistics --- 00:16:25.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.611 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=86718 00:16:25.611 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:25.612 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 86718 00:16:25.612 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 86718 ']' 00:16:25.612 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.612 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.612 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:25.612 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.612 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:25.612 [2024-11-08 02:20:27.457211] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:25.612 [2024-11-08 02:20:27.457489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.871 [2024-11-08 02:20:27.600165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.871 [2024-11-08 02:20:27.645350] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.871 [2024-11-08 02:20:27.645415] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.871 [2024-11-08 02:20:27.645429] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.871 [2024-11-08 02:20:27.645438] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.871 [2024-11-08 02:20:27.645447] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.871 [2024-11-08 02:20:27.645609] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.871 [2024-11-08 02:20:27.645750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.871 [2024-11-08 02:20:27.646681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.871 [2024-11-08 02:20:27.646758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.871 [2024-11-08 02:20:27.682878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.871 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.871 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:16:25.871 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:25.871 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:25.871 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.131 [2024-11-08 02:20:27.793730] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:16:26.131 02:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.131 Malloc1 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.131 [2024-11-08 02:20:27.846915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.131 Malloc2 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.131 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 Malloc3 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 Malloc4 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 Malloc5 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:26.132 
02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.132 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.132 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:16:26.132 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.132 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 Malloc6 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 Malloc7 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 Malloc8 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:16:26.400 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 
02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 Malloc9 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 Malloc10 00:16:26.401 02:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 Malloc11 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:26.401 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.674 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:16:26.674 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.674 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:26.674 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:16:26.674 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:26.674 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.674 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:26.674 02:20:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:28.577 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:28.577 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:28.577 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:16:28.577 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:28.577 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.577 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:28.577 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:28.577 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:16:28.836 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:28.836 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:28.836 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:28.836 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:28.836 02:20:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:30.740 02:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:30.740 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:30.740 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:16:30.740 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:30.740 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:30.740 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:30.740 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:30.740 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:30.999 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:30.999 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:30.999 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:30.999 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:30.999 02:20:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:32.903 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:32.903 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:32.903 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:16:32.903 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:32.903 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:32.903 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:32.903 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.903 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:33.162 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:33.162 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:33.162 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:33.162 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:16:33.162 02:20:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:35.066 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:35.066 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:35.066 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:16:35.066 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:35.066 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:35.066 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:35.066 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:35.066 02:20:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:35.325 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:35.325 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:35.325 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:35.325 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:35.325 02:20:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:37.230 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:37.230 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:37.230 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:16:37.230 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:37.230 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:37.230 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:37.230 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:37.230 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:37.489 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:37.489 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:37.489 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:37.489 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:37.489 02:20:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:39.393 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:39.393 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:39.393 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:16:39.393 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:39.393 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:39.393 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:39.393 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:39.393 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:39.652 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:39.652 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:39.652 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:39.652 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:39.652 02:20:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:41.557 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:41.557 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:41.557 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:16:41.557 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:41.557 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.557 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:41.557 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.557 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:16:41.816 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:41.816 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:16:41.816 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.816 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:41.816 02:20:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:43.720 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:43.720 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:43.720 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:16:43.720 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:43.720 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.720 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:43.720 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:43.720 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:43.978 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:43.978 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:43.978 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.978 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:43.978 02:20:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:45.883 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:45.883 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:45.883 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:16:45.883 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:45.883 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.883 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:45.883 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.883 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:46.141 02:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:46.141 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:46.141 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.141 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:46.142 02:20:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:48.057 02:20:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:48.057 02:20:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:48.057 02:20:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:16:48.057 02:20:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:48.057 02:20:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.057 02:20:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:48.057 02:20:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:48.057 02:20:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:48.315 02:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:48.315 02:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:48.315 02:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.315 02:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:48.315 02:20:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:50.220 02:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:50.220 02:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:50.220 02:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:16:50.479 02:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:50.479 02:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.479 02:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:50.479 02:20:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:50.479 [global] 00:16:50.479 thread=1 00:16:50.479 invalidate=1 00:16:50.479 rw=read 00:16:50.479 time_based=1 
00:16:50.479 runtime=10 00:16:50.479 ioengine=libaio 00:16:50.479 direct=1 00:16:50.479 bs=262144 00:16:50.479 iodepth=64 00:16:50.479 norandommap=1 00:16:50.479 numjobs=1 00:16:50.479 00:16:50.479 [job0] 00:16:50.479 filename=/dev/nvme0n1 00:16:50.479 [job1] 00:16:50.479 filename=/dev/nvme10n1 00:16:50.479 [job2] 00:16:50.479 filename=/dev/nvme1n1 00:16:50.479 [job3] 00:16:50.479 filename=/dev/nvme2n1 00:16:50.479 [job4] 00:16:50.479 filename=/dev/nvme3n1 00:16:50.479 [job5] 00:16:50.479 filename=/dev/nvme4n1 00:16:50.479 [job6] 00:16:50.479 filename=/dev/nvme5n1 00:16:50.479 [job7] 00:16:50.479 filename=/dev/nvme6n1 00:16:50.479 [job8] 00:16:50.479 filename=/dev/nvme7n1 00:16:50.479 [job9] 00:16:50.479 filename=/dev/nvme8n1 00:16:50.479 [job10] 00:16:50.479 filename=/dev/nvme9n1 00:16:50.479 Could not set queue depth (nvme0n1) 00:16:50.479 Could not set queue depth (nvme10n1) 00:16:50.479 Could not set queue depth (nvme1n1) 00:16:50.479 Could not set queue depth (nvme2n1) 00:16:50.479 Could not set queue depth (nvme3n1) 00:16:50.480 Could not set queue depth (nvme4n1) 00:16:50.480 Could not set queue depth (nvme5n1) 00:16:50.480 Could not set queue depth (nvme6n1) 00:16:50.480 Could not set queue depth (nvme7n1) 00:16:50.480 Could not set queue depth (nvme8n1) 00:16:50.480 Could not set queue depth (nvme9n1) 00:16:50.739 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:50.739 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:50.739 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:50.739 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:50.739 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:50.739 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:50.739 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:50.739 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:50.739 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:50.739 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:50.739 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:50.739 fio-3.35 00:16:50.739 Starting 11 threads 00:17:02.950 00:17:02.950 job0: (groupid=0, jobs=1): err= 0: pid=87166: Fri Nov 8 02:21:02 2024 00:17:02.950 read: IOPS=338, BW=84.7MiB/s (88.8MB/s)(854MiB/10083msec) 00:17:02.950 slat (usec): min=20, max=107574, avg=2924.34, stdev=7137.98 00:17:02.950 clat (msec): min=15, max=416, avg=185.69, stdev=40.17 00:17:02.950 lat (msec): min=19, max=445, avg=188.62, stdev=40.76 00:17:02.950 clat percentiles (msec): 00:17:02.950 | 1.00th=[ 126], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:17:02.950 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:17:02.950 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 199], 95.00th=[ 241], 00:17:02.950 | 99.00th=[ 388], 99.50th=[ 401], 99.90th=[ 418], 99.95th=[ 418], 00:17:02.950 | 99.99th=[ 418] 00:17:02.950 bw ( KiB/s): min=45056, max=94720, 
per=15.57%, avg=85811.20, stdev=13450.58, samples=20 00:17:02.950 iops : min= 176, max= 370, avg=335.20, stdev=52.54, samples=20 00:17:02.950 lat (msec) : 20=0.06%, 50=0.32%, 100=0.23%, 250=94.79%, 500=4.60% 00:17:02.950 cpu : usr=0.22%, sys=1.57%, ctx=698, majf=0, minf=4097 00:17:02.950 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:17:02.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:02.950 issued rwts: total=3416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.950 job1: (groupid=0, jobs=1): err= 0: pid=87167: Fri Nov 8 02:21:02 2024 00:17:02.950 read: IOPS=99, BW=24.9MiB/s (26.1MB/s)(253MiB/10156msec) 00:17:02.950 slat (usec): min=21, max=211101, avg=9954.72, stdev=26051.46 00:17:02.950 clat (msec): min=21, max=867, avg=632.35, stdev=155.54 00:17:02.950 lat (msec): min=22, max=867, avg=642.31, stdev=157.25 00:17:02.950 clat percentiles (msec): 00:17:02.950 | 1.00th=[ 159], 5.00th=[ 264], 10.00th=[ 422], 20.00th=[ 531], 00:17:02.950 | 30.00th=[ 625], 40.00th=[ 651], 50.00th=[ 676], 60.00th=[ 701], 00:17:02.950 | 70.00th=[ 726], 80.00th=[ 751], 90.00th=[ 776], 95.00th=[ 793], 00:17:02.950 | 99.00th=[ 844], 99.50th=[ 860], 99.90th=[ 869], 99.95th=[ 869], 00:17:02.950 | 99.99th=[ 869] 00:17:02.950 bw ( KiB/s): min=17408, max=36937, per=4.40%, avg=24221.25, stdev=4298.59, samples=20 00:17:02.950 iops : min= 68, max= 144, avg=94.60, stdev=16.75, samples=20 00:17:02.950 lat (msec) : 50=0.89%, 100=0.10%, 250=3.07%, 500=13.56%, 750=62.67% 00:17:02.950 lat (msec) : 1000=19.70% 00:17:02.950 cpu : usr=0.08%, sys=0.52%, ctx=209, majf=0, minf=4097 00:17:02.950 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:17:02.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.950 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:02.950 issued rwts: total=1010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.950 job2: (groupid=0, jobs=1): err= 0: pid=87168: Fri Nov 8 02:21:02 2024 00:17:02.950 read: IOPS=186, BW=46.7MiB/s (49.0MB/s)(473MiB/10118msec) 00:17:02.950 slat (usec): min=22, max=148392, avg=5297.72, stdev=14013.15 00:17:02.950 clat (msec): min=44, max=460, avg=336.59, stdev=62.20 00:17:02.950 lat (msec): min=45, max=474, avg=341.89, stdev=62.39 00:17:02.950 clat percentiles (msec): 00:17:02.950 | 1.00th=[ 115], 5.00th=[ 230], 10.00th=[ 262], 20.00th=[ 300], 00:17:02.950 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:17:02.950 | 70.00th=[ 368], 80.00th=[ 380], 90.00th=[ 401], 95.00th=[ 418], 00:17:02.950 | 99.00th=[ 439], 99.50th=[ 451], 99.90th=[ 460], 99.95th=[ 460], 00:17:02.950 | 99.99th=[ 460] 00:17:02.950 bw ( KiB/s): min=39936, max=51200, per=8.49%, avg=46791.95, stdev=2599.74, samples=20 00:17:02.950 iops : min= 156, max= 200, avg=182.70, stdev=10.13, samples=20 00:17:02.950 lat (msec) : 50=0.26%, 100=0.05%, 250=7.35%, 500=92.34% 00:17:02.950 cpu : usr=0.10%, sys=0.90%, ctx=350, majf=0, minf=4097 00:17:02.950 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:17:02.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.950 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:02.950 issued rwts: total=1892,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:17:02.950 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.950 job3: (groupid=0, jobs=1): err= 0: pid=87169: Fri Nov 8 02:21:02 2024 00:17:02.950 read: IOPS=254, BW=63.7MiB/s (66.8MB/s)(643MiB/10099msec) 00:17:02.950 slat (usec): min=21, max=101102, avg=3776.00, stdev=9263.37 00:17:02.950 clat (msec): min=20, max=473, avg=247.09, stdev=41.02 00:17:02.950 lat (msec): min=24, max=473, avg=250.86, stdev=41.42 00:17:02.950 clat percentiles (msec): 00:17:02.950 | 1.00th=[ 126], 5.00th=[ 203], 10.00th=[ 213], 20.00th=[ 228], 00:17:02.950 | 30.00th=[ 234], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:17:02.950 | 70.00th=[ 255], 80.00th=[ 264], 90.00th=[ 279], 95.00th=[ 305], 00:17:02.950 | 99.00th=[ 405], 99.50th=[ 430], 99.90th=[ 472], 99.95th=[ 472], 00:17:02.950 | 99.99th=[ 472] 00:17:02.950 bw ( KiB/s): min=45659, max=71168, per=11.65%, avg=64209.35, stdev=6675.55, samples=20 00:17:02.950 iops : min= 178, max= 278, avg=250.80, stdev=26.13, samples=20 00:17:02.950 lat (msec) : 50=0.58%, 100=0.04%, 250=60.96%, 500=38.41% 00:17:02.950 cpu : usr=0.20%, sys=1.18%, ctx=565, majf=0, minf=4097 00:17:02.950 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:17:02.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:02.951 issued rwts: total=2572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.951 job4: (groupid=0, jobs=1): err= 0: pid=87170: Fri Nov 8 02:21:02 2024 00:17:02.951 read: IOPS=188, BW=47.2MiB/s (49.5MB/s)(478MiB/10111msec) 00:17:02.951 slat (usec): min=21, max=145050, avg=5229.49, stdev=13323.38 00:17:02.951 clat (msec): min=40, max=431, avg=333.17, stdev=57.02 00:17:02.951 lat (msec): min=40, max=439, avg=338.40, stdev=57.36 00:17:02.951 clat percentiles (msec): 00:17:02.951 | 1.00th=[ 44], 5.00th=[ 247], 10.00th=[ 275], 20.00th=[ 305], 00:17:02.951 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 351], 00:17:02.951 | 70.00th=[ 363], 80.00th=[ 372], 90.00th=[ 384], 95.00th=[ 397], 00:17:02.951 | 99.00th=[ 422], 99.50th=[ 422], 99.90th=[ 430], 99.95th=[ 430], 00:17:02.951 | 99.99th=[ 430] 00:17:02.951 bw ( KiB/s): min=41472, max=50176, per=8.58%, avg=47306.15, stdev=2597.37, samples=20 00:17:02.951 iops : min= 162, max= 196, avg=184.60, stdev=10.07, samples=20 00:17:02.951 lat (msec) : 50=1.05%, 100=0.63%, 250=4.19%, 500=94.14% 00:17:02.951 cpu : usr=0.14%, sys=0.87%, ctx=359, majf=0, minf=4097 00:17:02.951 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:17:02.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.951 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:02.951 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.951 job5: (groupid=0, jobs=1): err= 0: pid=87171: Fri Nov 8 02:21:02 2024 00:17:02.951 read: IOPS=335, BW=83.9MiB/s (88.0MB/s)(846MiB/10073msec) 00:17:02.951 slat (usec): min=21, max=263195, avg=2951.56, stdev=8716.74 00:17:02.951 clat (msec): min=70, max=537, avg=187.48, stdev=50.32 00:17:02.951 lat (msec): min=73, max=669, avg=190.43, stdev=51.09 00:17:02.951 clat percentiles (msec): 00:17:02.951 | 1.00th=[ 88], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 169], 00:17:02.951 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 
184], 00:17:02.951 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 197], 95.00th=[ 215], 00:17:02.951 | 99.00th=[ 451], 99.50th=[ 485], 99.90th=[ 542], 99.95th=[ 542], 00:17:02.951 | 99.99th=[ 542] 00:17:02.951 bw ( KiB/s): min=29184, max=98304, per=15.43%, avg=85007.95, stdev=16365.33, samples=20 00:17:02.951 iops : min= 114, max= 384, avg=331.80, stdev=63.90, samples=20 00:17:02.951 lat (msec) : 100=1.18%, 250=94.77%, 500=3.87%, 750=0.18% 00:17:02.951 cpu : usr=0.19%, sys=1.46%, ctx=680, majf=0, minf=4097 00:17:02.951 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:17:02.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:02.951 issued rwts: total=3382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.951 job6: (groupid=0, jobs=1): err= 0: pid=87172: Fri Nov 8 02:21:02 2024 00:17:02.951 read: IOPS=185, BW=46.5MiB/s (48.7MB/s)(470MiB/10110msec) 00:17:02.951 slat (usec): min=21, max=144748, avg=5313.67, stdev=13417.88 00:17:02.951 clat (msec): min=107, max=467, avg=338.43, stdev=42.21 00:17:02.951 lat (msec): min=142, max=467, avg=343.74, stdev=42.07 00:17:02.951 clat percentiles (msec): 00:17:02.951 | 1.00th=[ 203], 5.00th=[ 262], 10.00th=[ 292], 20.00th=[ 313], 00:17:02.951 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 342], 60.00th=[ 351], 00:17:02.951 | 70.00th=[ 359], 80.00th=[ 368], 90.00th=[ 384], 95.00th=[ 397], 00:17:02.951 | 99.00th=[ 426], 99.50th=[ 430], 99.90th=[ 468], 99.95th=[ 468], 00:17:02.951 | 99.99th=[ 468] 00:17:02.951 bw ( KiB/s): min=34885, max=52328, per=8.44%, avg=46537.80, stdev=3459.81, samples=20 00:17:02.951 iops : min= 136, max= 204, avg=181.65, stdev=13.48, samples=20 00:17:02.951 lat (msec) : 250=3.56%, 500=96.44% 00:17:02.951 cpu : usr=0.14%, sys=0.88%, ctx=370, majf=0, minf=4097 00:17:02.951 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:17:02.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.951 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:02.951 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.951 job7: (groupid=0, jobs=1): err= 0: pid=87173: Fri Nov 8 02:21:02 2024 00:17:02.951 read: IOPS=100, BW=25.1MiB/s (26.4MB/s)(255MiB/10154msec) 00:17:02.951 slat (usec): min=28, max=198055, avg=9869.90, stdev=26874.55 00:17:02.951 clat (msec): min=65, max=896, avg=625.79, stdev=200.18 00:17:02.951 lat (msec): min=66, max=907, avg=635.66, stdev=202.42 00:17:02.951 clat percentiles (msec): 00:17:02.951 | 1.00th=[ 74], 5.00th=[ 144], 10.00th=[ 330], 20.00th=[ 493], 00:17:02.951 | 30.00th=[ 600], 40.00th=[ 642], 50.00th=[ 684], 60.00th=[ 709], 00:17:02.951 | 70.00th=[ 751], 80.00th=[ 793], 90.00th=[ 827], 95.00th=[ 844], 00:17:02.951 | 99.00th=[ 869], 99.50th=[ 869], 99.90th=[ 894], 99.95th=[ 894], 00:17:02.951 | 99.99th=[ 894] 00:17:02.951 bw ( KiB/s): min=12288, max=45056, per=4.45%, avg=24506.90, stdev=7882.37, samples=20 00:17:02.951 iops : min= 48, max= 176, avg=95.65, stdev=30.78, samples=20 00:17:02.951 lat (msec) : 100=3.62%, 250=3.43%, 500=13.52%, 750=48.09%, 1000=31.34% 00:17:02.951 cpu : usr=0.03%, sys=0.52%, ctx=177, majf=0, minf=4097 00:17:02.951 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:17:02.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.951 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:02.951 issued rwts: total=1021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.951 job8: (groupid=0, jobs=1): err= 0: pid=87174: Fri Nov 8 02:21:02 2024 00:17:02.951 read: IOPS=106, BW=26.7MiB/s (28.0MB/s)(271MiB/10142msec) 00:17:02.951 slat (usec): min=20, max=184027, avg=8919.28, stdev=23998.21 00:17:02.951 clat (msec): min=16, max=957, avg=588.57, stdev=210.86 00:17:02.951 lat (msec): min=17, max=957, avg=597.49, stdev=213.73 00:17:02.951 clat percentiles (msec): 00:17:02.951 | 1.00th=[ 144], 5.00th=[ 190], 10.00th=[ 222], 20.00th=[ 292], 00:17:02.951 | 30.00th=[ 600], 40.00th=[ 634], 50.00th=[ 667], 60.00th=[ 693], 00:17:02.951 | 70.00th=[ 718], 80.00th=[ 760], 90.00th=[ 785], 95.00th=[ 818], 00:17:02.951 | 99.00th=[ 860], 99.50th=[ 885], 99.90th=[ 961], 99.95th=[ 961], 00:17:02.951 | 99.99th=[ 961] 00:17:02.951 bw ( KiB/s): min=16896, max=56207, per=4.74%, avg=26147.70, stdev=10631.22, samples=20 00:17:02.951 iops : min= 66, max= 219, avg=102.05, stdev=41.46, samples=20 00:17:02.951 lat (msec) : 20=0.09%, 250=14.47%, 500=11.34%, 750=52.26%, 1000=21.84% 00:17:02.951 cpu : usr=0.04%, sys=0.53%, ctx=227, majf=0, minf=4097 00:17:02.951 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2% 00:17:02.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.951 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:02.951 issued rwts: total=1085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.951 job9: (groupid=0, jobs=1): err= 0: pid=87175: Fri Nov 8 02:21:02 2024 00:17:02.951 read: IOPS=102, BW=25.6MiB/s (26.9MB/s)(260MiB/10158msec) 00:17:02.951 slat (usec): min=23, max=162421, avg=9659.33, stdev=23814.69 00:17:02.951 clat (msec): min=22, max=809, avg=613.77, stdev=156.49 00:17:02.951 lat (msec): min=23, max=862, avg=623.43, stdev=158.64 00:17:02.951 clat percentiles (msec): 00:17:02.951 | 1.00th=[ 39], 5.00th=[ 264], 10.00th=[ 334], 20.00th=[ 498], 00:17:02.951 | 30.00th=[ 634], 40.00th=[ 651], 50.00th=[ 676], 60.00th=[ 684], 00:17:02.951 | 70.00th=[ 701], 80.00th=[ 718], 90.00th=[ 743], 95.00th=[ 760], 00:17:02.951 | 99.00th=[ 793], 99.50th=[ 810], 99.90th=[ 810], 99.95th=[ 810], 00:17:02.951 | 99.99th=[ 810] 00:17:02.951 bw ( KiB/s): min=19968, max=41984, per=4.54%, avg=25011.20, stdev=5378.31, samples=20 00:17:02.951 iops : min= 78, max= 164, avg=97.70, stdev=21.01, samples=20 00:17:02.951 lat (msec) : 50=1.15%, 100=0.10%, 250=2.02%, 500=16.91%, 750=72.24% 00:17:02.951 lat (msec) : 1000=7.59% 00:17:02.951 cpu : usr=0.06%, sys=0.53%, ctx=212, majf=0, minf=4097 00:17:02.951 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:17:02.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.951 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:02.951 issued rwts: total=1041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.951 job10: (groupid=0, jobs=1): err= 0: pid=87176: Fri Nov 8 02:21:02 2024 00:17:02.951 read: IOPS=263, BW=65.8MiB/s (69.0MB/s)(664MiB/10099msec) 00:17:02.951 slat (usec): min=17, max=74743, avg=3760.56, stdev=8907.68 00:17:02.951 clat (msec): min=11, max=379, avg=239.12, stdev=43.83 00:17:02.951 
lat (msec): min=12, max=389, avg=242.88, stdev=44.24 00:17:02.951 clat percentiles (msec): 00:17:02.951 | 1.00th=[ 31], 5.00th=[ 184], 10.00th=[ 203], 20.00th=[ 224], 00:17:02.951 | 30.00th=[ 230], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 249], 00:17:02.951 | 70.00th=[ 255], 80.00th=[ 264], 90.00th=[ 279], 95.00th=[ 296], 00:17:02.951 | 99.00th=[ 342], 99.50th=[ 363], 99.90th=[ 380], 99.95th=[ 380], 00:17:02.951 | 99.99th=[ 380] 00:17:02.951 bw ( KiB/s): min=52736, max=83623, per=12.05%, avg=66389.15, stdev=6155.67, samples=20 00:17:02.951 iops : min= 206, max= 326, avg=259.30, stdev=23.95, samples=20 00:17:02.951 lat (msec) : 20=0.34%, 50=1.20%, 100=0.68%, 250=60.56%, 500=37.22% 00:17:02.951 cpu : usr=0.13%, sys=1.27%, ctx=544, majf=0, minf=4097 00:17:02.951 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:17:02.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:02.951 issued rwts: total=2657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.951 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:02.951 00:17:02.951 Run status group 0 (all jobs): 00:17:02.951 READ: bw=538MiB/s (564MB/s), 24.9MiB/s-84.7MiB/s (26.1MB/s-88.8MB/s), io=5467MiB (5732MB), run=10073-10158msec 00:17:02.951 00:17:02.951 Disk stats (read/write): 00:17:02.951 nvme0n1: ios=6714/0, merge=0/0, ticks=1230945/0, in_queue=1230945, util=97.91% 00:17:02.951 nvme10n1: ios=1893/0, merge=0/0, ticks=1203145/0, in_queue=1203145, util=97.95% 00:17:02.951 nvme1n1: ios=3656/0, merge=0/0, ticks=1225821/0, in_queue=1225821, util=98.16% 00:17:02.951 nvme2n1: ios=5016/0, merge=0/0, ticks=1230688/0, in_queue=1230688, util=98.24% 00:17:02.951 nvme3n1: ios=3695/0, merge=0/0, ticks=1227693/0, in_queue=1227693, util=98.23% 00:17:02.952 nvme4n1: ios=6637/0, merge=0/0, ticks=1234610/0, in_queue=1234610, util=98.41% 00:17:02.952 nvme5n1: ios=3649/0, merge=0/0, ticks=1226640/0, in_queue=1226640, util=98.57% 00:17:02.952 nvme6n1: ios=1915/0, merge=0/0, ticks=1204678/0, in_queue=1204678, util=98.66% 00:17:02.952 nvme7n1: ios=2046/0, merge=0/0, ticks=1194096/0, in_queue=1194096, util=98.85% 00:17:02.952 nvme8n1: ios=1954/0, merge=0/0, ticks=1204002/0, in_queue=1204002, util=99.09% 00:17:02.952 nvme9n1: ios=5193/0, merge=0/0, ticks=1230477/0, in_queue=1230477, util=99.19% 00:17:02.952 02:21:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:17:02.952 [global] 00:17:02.952 thread=1 00:17:02.952 invalidate=1 00:17:02.952 rw=randwrite 00:17:02.952 time_based=1 00:17:02.952 runtime=10 00:17:02.952 ioengine=libaio 00:17:02.952 direct=1 00:17:02.952 bs=262144 00:17:02.952 iodepth=64 00:17:02.952 norandommap=1 00:17:02.952 numjobs=1 00:17:02.952 00:17:02.952 [job0] 00:17:02.952 filename=/dev/nvme0n1 00:17:02.952 [job1] 00:17:02.952 filename=/dev/nvme10n1 00:17:02.952 [job2] 00:17:02.952 filename=/dev/nvme1n1 00:17:02.952 [job3] 00:17:02.952 filename=/dev/nvme2n1 00:17:02.952 [job4] 00:17:02.952 filename=/dev/nvme3n1 00:17:02.952 [job5] 00:17:02.952 filename=/dev/nvme4n1 00:17:02.952 [job6] 00:17:02.952 filename=/dev/nvme5n1 00:17:02.952 [job7] 00:17:02.952 filename=/dev/nvme6n1 00:17:02.952 [job8] 00:17:02.952 filename=/dev/nvme7n1 00:17:02.952 [job9] 00:17:02.952 filename=/dev/nvme8n1 00:17:02.952 [job10] 00:17:02.952 filename=/dev/nvme9n1 00:17:02.952 Could not set queue 
depth (nvme0n1) 00:17:02.952 Could not set queue depth (nvme10n1) 00:17:02.952 Could not set queue depth (nvme1n1) 00:17:02.952 Could not set queue depth (nvme2n1) 00:17:02.952 Could not set queue depth (nvme3n1) 00:17:02.952 Could not set queue depth (nvme4n1) 00:17:02.952 Could not set queue depth (nvme5n1) 00:17:02.952 Could not set queue depth (nvme6n1) 00:17:02.952 Could not set queue depth (nvme7n1) 00:17:02.952 Could not set queue depth (nvme8n1) 00:17:02.952 Could not set queue depth (nvme9n1) 00:17:02.952 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:02.952 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:02.952 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:02.952 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:02.952 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:02.952 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:02.952 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:02.952 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:02.952 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:02.952 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:02.952 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:02.952 fio-3.35 00:17:02.952 Starting 11 threads 00:17:12.936 00:17:12.936 job0: (groupid=0, jobs=1): err= 0: pid=87376: Fri Nov 8 02:21:13 2024 00:17:12.936 write: IOPS=148, BW=37.0MiB/s (38.8MB/s)(379MiB/10237msec); 0 zone resets 00:17:12.936 slat (usec): min=17, max=51466, avg=6591.05, stdev=11844.40 00:17:12.936 clat (msec): min=37, max=700, avg=425.34, stdev=72.94 00:17:12.936 lat (msec): min=37, max=700, avg=431.93, stdev=73.33 00:17:12.936 clat percentiles (msec): 00:17:12.936 | 1.00th=[ 84], 5.00th=[ 288], 10.00th=[ 380], 20.00th=[ 409], 00:17:12.936 | 30.00th=[ 422], 40.00th=[ 435], 50.00th=[ 439], 60.00th=[ 447], 00:17:12.936 | 70.00th=[ 456], 80.00th=[ 464], 90.00th=[ 472], 95.00th=[ 477], 00:17:12.936 | 99.00th=[ 575], 99.50th=[ 642], 99.90th=[ 701], 99.95th=[ 701], 00:17:12.936 | 99.99th=[ 701] 00:17:12.936 bw ( KiB/s): min=33280, max=49250, per=4.91%, avg=37201.70, stdev=3455.67, samples=20 00:17:12.936 iops : min= 130, max= 192, avg=145.30, stdev=13.43, samples=20 00:17:12.936 lat (msec) : 50=0.26%, 100=1.06%, 250=2.44%, 500=94.53%, 750=1.72% 00:17:12.936 cpu : usr=0.28%, sys=0.47%, ctx=1733, majf=0, minf=1 00:17:12.936 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:17:12.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.936 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:12.936 issued rwts: total=0,1516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.936 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.936 job1: (groupid=0, jobs=1): err= 0: pid=87377: Fri Nov 8 02:21:13 2024 
00:17:12.936 write: IOPS=137, BW=34.5MiB/s (36.1MB/s)(353MiB/10245msec); 0 zone resets 00:17:12.936 slat (usec): min=20, max=97394, avg=7085.44, stdev=13572.33 00:17:12.936 clat (msec): min=36, max=686, avg=456.88, stdev=78.95 00:17:12.936 lat (msec): min=36, max=686, avg=463.96, stdev=79.25 00:17:12.936 clat percentiles (msec): 00:17:12.936 | 1.00th=[ 84], 5.00th=[ 317], 10.00th=[ 409], 20.00th=[ 447], 00:17:12.936 | 30.00th=[ 460], 40.00th=[ 468], 50.00th=[ 481], 60.00th=[ 485], 00:17:12.936 | 70.00th=[ 489], 80.00th=[ 493], 90.00th=[ 502], 95.00th=[ 506], 00:17:12.936 | 99.00th=[ 575], 99.50th=[ 625], 99.90th=[ 684], 99.95th=[ 684], 00:17:12.936 | 99.99th=[ 684] 00:17:12.936 bw ( KiB/s): min=32768, max=45056, per=4.55%, avg=34508.80, stdev=2991.91, samples=20 00:17:12.936 iops : min= 128, max= 176, avg=134.80, stdev=11.69, samples=20 00:17:12.936 lat (msec) : 50=0.28%, 100=1.13%, 250=2.90%, 500=84.14%, 750=11.54% 00:17:12.936 cpu : usr=0.29%, sys=0.44%, ctx=1639, majf=0, minf=1 00:17:12.936 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:17:12.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.936 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:12.936 issued rwts: total=0,1412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.936 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.936 job2: (groupid=0, jobs=1): err= 0: pid=87389: Fri Nov 8 02:21:13 2024 00:17:12.936 write: IOPS=146, BW=36.7MiB/s (38.5MB/s)(376MiB/10239msec); 0 zone resets 00:17:12.936 slat (usec): min=17, max=133357, avg=6647.07, stdev=12260.57 00:17:12.936 clat (msec): min=135, max=688, avg=428.83, stdev=52.03 00:17:12.936 lat (msec): min=135, max=688, avg=435.48, stdev=51.61 00:17:12.936 clat percentiles (msec): 00:17:12.936 | 1.00th=[ 178], 5.00th=[ 347], 10.00th=[ 405], 20.00th=[ 418], 00:17:12.936 | 30.00th=[ 426], 40.00th=[ 435], 50.00th=[ 439], 60.00th=[ 443], 00:17:12.936 | 70.00th=[ 447], 80.00th=[ 451], 90.00th=[ 456], 95.00th=[ 460], 00:17:12.936 | 99.00th=[ 575], 99.50th=[ 634], 99.90th=[ 693], 99.95th=[ 693], 00:17:12.936 | 99.99th=[ 693] 00:17:12.936 bw ( KiB/s): min=34816, max=38912, per=4.87%, avg=36889.60, stdev=1107.81, samples=20 00:17:12.936 iops : min= 136, max= 152, avg=144.10, stdev= 4.33, samples=20 00:17:12.936 lat (msec) : 250=2.19%, 500=96.08%, 750=1.73% 00:17:12.936 cpu : usr=0.28%, sys=0.47%, ctx=907, majf=0, minf=1 00:17:12.936 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:17:12.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.936 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:12.936 issued rwts: total=0,1504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.936 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.936 job3: (groupid=0, jobs=1): err= 0: pid=87390: Fri Nov 8 02:21:13 2024 00:17:12.937 write: IOPS=156, BW=39.2MiB/s (41.1MB/s)(401MiB/10246msec); 0 zone resets 00:17:12.937 slat (usec): min=21, max=30521, avg=6066.69, stdev=11201.93 00:17:12.937 clat (msec): min=18, max=688, avg=402.29, stdev=89.04 00:17:12.937 lat (msec): min=18, max=688, avg=408.36, stdev=90.10 00:17:12.937 clat percentiles (msec): 00:17:12.937 | 1.00th=[ 79], 5.00th=[ 188], 10.00th=[ 247], 20.00th=[ 405], 00:17:12.937 | 30.00th=[ 414], 40.00th=[ 422], 50.00th=[ 430], 60.00th=[ 439], 00:17:12.937 | 70.00th=[ 443], 80.00th=[ 447], 90.00th=[ 451], 95.00th=[ 456], 00:17:12.937 | 99.00th=[ 575], 99.50th=[ 634], 
99.90th=[ 693], 99.95th=[ 693], 00:17:12.937 | 99.99th=[ 693] 00:17:12.937 bw ( KiB/s): min=34816, max=73728, per=5.21%, avg=39449.60, stdev=8276.57, samples=20 00:17:12.937 iops : min= 136, max= 288, avg=154.10, stdev=32.33, samples=20 00:17:12.937 lat (msec) : 20=0.25%, 50=0.25%, 100=1.25%, 250=8.29%, 500=88.35% 00:17:12.937 lat (msec) : 750=1.62% 00:17:12.937 cpu : usr=0.42%, sys=0.45%, ctx=1893, majf=0, minf=1 00:17:12.937 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:17:12.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.937 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:12.937 issued rwts: total=0,1605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.937 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.937 job4: (groupid=0, jobs=1): err= 0: pid=87391: Fri Nov 8 02:21:13 2024 00:17:12.937 write: IOPS=337, BW=84.3MiB/s (88.4MB/s)(855MiB/10144msec); 0 zone resets 00:17:12.937 slat (usec): min=16, max=33194, avg=2919.79, stdev=5072.38 00:17:12.937 clat (msec): min=31, max=316, avg=186.73, stdev=17.99 00:17:12.937 lat (msec): min=31, max=316, avg=189.65, stdev=17.55 00:17:12.937 clat percentiles (msec): 00:17:12.937 | 1.00th=[ 136], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:17:12.937 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 188], 00:17:12.937 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 194], 95.00th=[ 197], 00:17:12.937 | 99.00th=[ 264], 99.50th=[ 271], 99.90th=[ 305], 99.95th=[ 317], 00:17:12.937 | 99.99th=[ 317] 00:17:12.937 bw ( KiB/s): min=67584, max=88064, per=11.35%, avg=85964.80, stdev=4438.40, samples=20 00:17:12.937 iops : min= 264, max= 344, avg=335.80, stdev=17.34, samples=20 00:17:12.937 lat (msec) : 50=0.23%, 100=0.47%, 250=97.52%, 500=1.78% 00:17:12.937 cpu : usr=0.66%, sys=0.99%, ctx=4009, majf=0, minf=1 00:17:12.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:17:12.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:12.937 issued rwts: total=0,3421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.937 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.937 job5: (groupid=0, jobs=1): err= 0: pid=87392: Fri Nov 8 02:21:13 2024 00:17:12.937 write: IOPS=335, BW=83.9MiB/s (88.0MB/s)(851MiB/10138msec); 0 zone resets 00:17:12.937 slat (usec): min=18, max=80884, avg=2931.54, stdev=5210.63 00:17:12.937 clat (msec): min=28, max=320, avg=187.60, stdev=20.56 00:17:12.937 lat (msec): min=28, max=320, avg=190.53, stdev=20.25 00:17:12.937 clat percentiles (msec): 00:17:12.937 | 1.00th=[ 138], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:17:12.937 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 188], 00:17:12.937 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 194], 95.00th=[ 199], 00:17:12.937 | 99.00th=[ 284], 99.50th=[ 305], 99.90th=[ 309], 99.95th=[ 321], 00:17:12.937 | 99.99th=[ 321] 00:17:12.937 bw ( KiB/s): min=61050, max=88064, per=11.29%, avg=85535.70, stdev=5898.95, samples=20 00:17:12.937 iops : min= 238, max= 344, avg=334.10, stdev=23.15, samples=20 00:17:12.937 lat (msec) : 50=0.21%, 100=0.35%, 250=96.86%, 500=2.59% 00:17:12.937 cpu : usr=0.67%, sys=0.99%, ctx=4353, majf=0, minf=1 00:17:12.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:17:12.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.937 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:12.937 issued rwts: total=0,3404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.937 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.937 job6: (groupid=0, jobs=1): err= 0: pid=87393: Fri Nov 8 02:21:13 2024 00:17:12.937 write: IOPS=140, BW=35.2MiB/s (36.9MB/s)(360MiB/10237msec); 0 zone resets 00:17:12.937 slat (usec): min=16, max=182710, avg=6949.50, stdev=13568.27 00:17:12.937 clat (msec): min=184, max=676, avg=447.81, stdev=49.73 00:17:12.937 lat (msec): min=184, max=676, avg=454.76, stdev=48.98 00:17:12.937 clat percentiles (msec): 00:17:12.937 | 1.00th=[ 234], 5.00th=[ 388], 10.00th=[ 414], 20.00th=[ 430], 00:17:12.937 | 30.00th=[ 439], 40.00th=[ 443], 50.00th=[ 447], 60.00th=[ 456], 00:17:12.937 | 70.00th=[ 464], 80.00th=[ 481], 90.00th=[ 493], 95.00th=[ 506], 00:17:12.937 | 99.00th=[ 567], 99.50th=[ 617], 99.90th=[ 676], 99.95th=[ 676], 00:17:12.937 | 99.99th=[ 676] 00:17:12.937 bw ( KiB/s): min=30658, max=36864, per=4.65%, avg=35222.50, stdev=1956.16, samples=20 00:17:12.937 iops : min= 119, max= 144, avg=137.55, stdev= 7.74, samples=20 00:17:12.937 lat (msec) : 250=1.46%, 500=91.94%, 750=6.60% 00:17:12.937 cpu : usr=0.22%, sys=0.49%, ctx=1645, majf=0, minf=1 00:17:12.937 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:17:12.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.937 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:12.937 issued rwts: total=0,1440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.937 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.937 job7: (groupid=0, jobs=1): err= 0: pid=87394: Fri Nov 8 02:21:13 2024 00:17:12.937 write: IOPS=557, BW=139MiB/s (146MB/s)(1408MiB/10096msec); 0 zone resets 00:17:12.937 slat (usec): min=17, max=43874, avg=1770.91, stdev=3058.80 00:17:12.937 clat (msec): min=11, max=201, avg=112.96, stdev= 9.83 00:17:12.937 lat (msec): min=11, max=201, avg=114.73, stdev= 9.51 00:17:12.937 clat percentiles (msec): 00:17:12.937 | 1.00th=[ 97], 5.00th=[ 107], 10.00th=[ 108], 20.00th=[ 109], 00:17:12.937 | 30.00th=[ 112], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 115], 00:17:12.937 | 70.00th=[ 115], 80.00th=[ 116], 90.00th=[ 116], 95.00th=[ 117], 00:17:12.937 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 194], 99.95th=[ 194], 00:17:12.937 | 99.99th=[ 203] 00:17:12.937 bw ( KiB/s): min=126464, max=146432, per=18.81%, avg=142515.20, stdev=4025.58, samples=20 00:17:12.937 iops : min= 494, max= 572, avg=556.70, stdev=15.72, samples=20 00:17:12.937 lat (msec) : 20=0.12%, 50=0.28%, 100=0.66%, 250=98.93% 00:17:12.937 cpu : usr=0.97%, sys=1.68%, ctx=5170, majf=0, minf=1 00:17:12.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:17:12.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:12.937 issued rwts: total=0,5630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.937 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.937 job8: (groupid=0, jobs=1): err= 0: pid=87395: Fri Nov 8 02:21:13 2024 00:17:12.937 write: IOPS=555, BW=139MiB/s (146MB/s)(1402MiB/10097msec); 0 zone resets 00:17:12.937 slat (usec): min=17, max=93613, avg=1771.49, stdev=3231.40 00:17:12.937 clat (msec): min=90, max=202, avg=113.45, stdev= 7.80 00:17:12.937 lat (msec): min=96, max=202, avg=115.22, stdev= 7.24 00:17:12.937 clat percentiles (msec): 
00:17:12.937 | 1.00th=[ 106], 5.00th=[ 107], 10.00th=[ 108], 20.00th=[ 109], 00:17:12.937 | 30.00th=[ 112], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 115], 00:17:12.937 | 70.00th=[ 115], 80.00th=[ 116], 90.00th=[ 116], 95.00th=[ 117], 00:17:12.937 | 99.00th=[ 155], 99.50th=[ 180], 99.90th=[ 201], 99.95th=[ 201], 00:17:12.937 | 99.99th=[ 203] 00:17:12.937 bw ( KiB/s): min=114176, max=145920, per=18.73%, avg=141900.80, stdev=6661.75, samples=20 00:17:12.937 iops : min= 446, max= 570, avg=554.30, stdev=26.02, samples=20 00:17:12.937 lat (msec) : 100=0.14%, 250=99.86% 00:17:12.937 cpu : usr=1.07%, sys=1.65%, ctx=7015, majf=0, minf=1 00:17:12.937 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:17:12.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.937 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:12.937 issued rwts: total=0,5606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.937 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.937 job9: (groupid=0, jobs=1): err= 0: pid=87396: Fri Nov 8 02:21:13 2024 00:17:12.937 write: IOPS=136, BW=34.2MiB/s (35.9MB/s)(350MiB/10231msec); 0 zone resets 00:17:12.937 slat (usec): min=16, max=231668, avg=7143.81, stdev=14573.02 00:17:12.937 clat (msec): min=220, max=676, avg=460.35, stdev=47.11 00:17:12.937 lat (msec): min=233, max=676, avg=467.49, stdev=46.00 00:17:12.937 clat percentiles (msec): 00:17:12.937 | 1.00th=[ 271], 5.00th=[ 409], 10.00th=[ 422], 20.00th=[ 439], 00:17:12.937 | 30.00th=[ 447], 40.00th=[ 451], 50.00th=[ 456], 60.00th=[ 472], 00:17:12.937 | 70.00th=[ 485], 80.00th=[ 493], 90.00th=[ 506], 95.00th=[ 514], 00:17:12.937 | 99.00th=[ 575], 99.50th=[ 617], 99.90th=[ 676], 99.95th=[ 676], 00:17:12.937 | 99.99th=[ 676] 00:17:12.937 bw ( KiB/s): min=24625, max=36864, per=4.52%, avg=34222.30, stdev=2819.24, samples=20 00:17:12.938 iops : min= 96, max= 144, avg=133.65, stdev=11.03, samples=20 00:17:12.938 lat (msec) : 250=0.71%, 500=86.50%, 750=12.79% 00:17:12.938 cpu : usr=0.20%, sys=0.48%, ctx=1432, majf=0, minf=1 00:17:12.938 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:17:12.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.938 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:12.938 issued rwts: total=0,1400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.938 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.938 job10: (groupid=0, jobs=1): err= 0: pid=87397: Fri Nov 8 02:21:13 2024 00:17:12.938 write: IOPS=333, BW=83.5MiB/s (87.5MB/s)(846MiB/10135msec); 0 zone resets 00:17:12.938 slat (usec): min=16, max=93759, avg=2948.62, stdev=5424.32 00:17:12.938 clat (msec): min=95, max=318, avg=188.66, stdev=20.64 00:17:12.938 lat (msec): min=95, max=318, avg=191.60, stdev=20.21 00:17:12.938 clat percentiles (msec): 00:17:12.938 | 1.00th=[ 159], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:17:12.938 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 188], 00:17:12.938 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 194], 95.00th=[ 197], 00:17:12.938 | 99.00th=[ 309], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 321], 00:17:12.938 | 99.99th=[ 321] 00:17:12.938 bw ( KiB/s): min=51200, max=88064, per=11.22%, avg=85000.05, stdev=8060.62, samples=20 00:17:12.938 iops : min= 200, max= 344, avg=332.00, stdev=31.48, samples=20 00:17:12.938 lat (msec) : 100=0.12%, 250=97.28%, 500=2.60% 00:17:12.938 cpu : usr=0.66%, sys=1.00%, ctx=3853, majf=0, 
minf=1 00:17:12.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:17:12.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:12.938 issued rwts: total=0,3384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.938 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.938 00:17:12.938 Run status group 0 (all jobs): 00:17:12.938 WRITE: bw=740MiB/s (776MB/s), 34.2MiB/s-139MiB/s (35.9MB/s-146MB/s), io=7581MiB (7949MB), run=10096-10246msec 00:17:12.938 00:17:12.938 Disk stats (read/write): 00:17:12.938 nvme0n1: ios=49/2908, merge=0/0, ticks=62/1195521, in_queue=1195583, util=97.98% 00:17:12.938 nvme10n1: ios=49/2820, merge=0/0, ticks=54/1239562, in_queue=1239616, util=98.10% 00:17:12.938 nvme1n1: ios=41/3007, merge=0/0, ticks=49/1241590, in_queue=1241639, util=98.21% 00:17:12.938 nvme2n1: ios=0/3209, merge=0/0, ticks=0/1243352, in_queue=1243352, util=98.15% 00:17:12.938 nvme3n1: ios=22/6707, merge=0/0, ticks=32/1211175, in_queue=1211207, util=98.16% 00:17:12.938 nvme4n1: ios=0/6679, merge=0/0, ticks=0/1211229, in_queue=1211229, util=98.32% 00:17:12.938 nvme5n1: ios=0/2875, merge=0/0, ticks=0/1240605, in_queue=1240605, util=98.43% 00:17:12.938 nvme6n1: ios=0/11115, merge=0/0, ticks=0/1214492, in_queue=1214492, util=98.46% 00:17:12.938 nvme7n1: ios=0/11067, merge=0/0, ticks=0/1214665, in_queue=1214665, util=98.71% 00:17:12.938 nvme8n1: ios=0/2795, merge=0/0, ticks=0/1239755, in_queue=1239755, util=98.83% 00:17:12.938 nvme9n1: ios=0/6635, merge=0/0, ticks=0/1210410, in_queue=1210410, util=98.89% 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.938 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:17:12.938 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:12.938 02:21:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:17:12.938 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:17:12.938 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:17:12.938 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:12.938 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:17:12.939 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:17:12.939 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:17:12.939 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:17:12.939 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
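The same three-step teardown repeats below for cnode10 and cnode11. Condensed, the loop that target/multiconnection.sh is running in this part of the trace looks like the following sketch (waitforserial_disconnect and rpc_cmd are the test suite's own helpers, visible in the xtrace above; this is a summary of the traced commands, not a drop-in script):

# For each test subsystem: disconnect the initiator, wait until the
# SPDK$i serial disappears from lsblk, then delete the subsystem via RPC.
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    waitforserial_disconnect "SPDK${i}"   # polls lsblk -l -o NAME,SERIAL until SPDK$i is gone
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done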
00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:17:12.939 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:17:12.939 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:12.939 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- 
# set +x 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:13.199 rmmod nvme_tcp 00:17:13.199 rmmod nvme_fabrics 00:17:13.199 rmmod nvme_keyring 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 86718 ']' 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 86718 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 86718 ']' 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 86718 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86718 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:13.199 killing process with pid 86718 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86718' 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 86718 00:17:13.199 02:21:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 86718 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:13.459 02:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.459 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:17:13.718 00:17:13.718 real 0m48.770s 00:17:13.718 user 2m48.307s 00:17:13.718 sys 0m24.150s 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:13.718 ************************************ 00:17:13.718 END TEST nvmf_multiconnection 00:17:13.718 ************************************ 00:17:13.718 02:21:15 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:13.718 ************************************ 00:17:13.718 START TEST nvmf_initiator_timeout 00:17:13.718 ************************************ 00:17:13.718 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:17:13.979 * Looking for test storage... 00:17:13.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.979 --rc genhtml_branch_coverage=1 00:17:13.979 --rc genhtml_function_coverage=1 00:17:13.979 --rc genhtml_legend=1 00:17:13.979 --rc geninfo_all_blocks=1 00:17:13.979 --rc geninfo_unexecuted_blocks=1 00:17:13.979 00:17:13.979 ' 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.979 --rc genhtml_branch_coverage=1 00:17:13.979 --rc genhtml_function_coverage=1 00:17:13.979 --rc genhtml_legend=1 00:17:13.979 --rc geninfo_all_blocks=1 00:17:13.979 --rc geninfo_unexecuted_blocks=1 00:17:13.979 00:17:13.979 ' 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.979 --rc genhtml_branch_coverage=1 00:17:13.979 --rc genhtml_function_coverage=1 00:17:13.979 --rc genhtml_legend=1 00:17:13.979 --rc geninfo_all_blocks=1 00:17:13.979 --rc geninfo_unexecuted_blocks=1 00:17:13.979 00:17:13.979 ' 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:13.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.979 --rc genhtml_branch_coverage=1 00:17:13.979 --rc genhtml_function_coverage=1 00:17:13.979 --rc genhtml_legend=1 00:17:13.979 --rc geninfo_all_blocks=1 00:17:13.979 --rc geninfo_unexecuted_blocks=1 00:17:13.979 00:17:13.979 ' 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:13.979 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.980 02:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:13.980 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
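The NVMF_* variables above (and the remaining interface names just below) describe the virtual topology that nvmf_veth_init builds for the TCP tests; the ip and iptables commands that realize it are traced further down. A condensed sketch, using exactly the interface names, namespace, and addresses shown in this log (run as root, error handling and the SPDK_NVMF iptables comments omitted):

# Two initiator veth ends stay in the default namespace (10.0.0.1/.2),
# two target ends move into the nvmf_tgt_ns_spdk namespace (10.0.0.3/.4),
# and the peer ends are enslaved to the nvmf_br bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br
done
# Allow NVMe/TCP (port 4420) in on the initiator interfaces and forwarding on the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The trace below then verifies the datapath with one ping per address pair before the target is started.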
00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:13.980 Cannot find device "nvmf_init_br" 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:13.980 Cannot find device "nvmf_init_br2" 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:13.980 Cannot find device "nvmf_tgt_br" 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.980 Cannot find device "nvmf_tgt_br2" 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:13.980 Cannot find device "nvmf_init_br" 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:13.980 Cannot find device "nvmf_init_br2" 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:17:13.980 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:14.239 Cannot find device "nvmf_tgt_br" 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:14.239 Cannot find device "nvmf_tgt_br2" 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:17:14.239 02:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:14.239 Cannot find device "nvmf_br" 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:14.239 Cannot find device "nvmf_init_if" 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:14.239 Cannot find device "nvmf_init_if2" 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:14.239 02:21:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:14.239 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:14.240 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:14.240 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:14.240 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:14.240 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:14.240 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:14.240 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:14.240 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:14.240 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:17:14.240 00:17:14.240 --- 10.0.0.3 ping statistics --- 00:17:14.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.240 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:14.240 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:14.240 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:14.240 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:17:14.240 00:17:14.240 --- 10.0.0.4 ping statistics --- 00:17:14.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.240 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:14.240 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:14.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:14.499 00:17:14.499 --- 10.0.0.1 ping statistics --- 00:17:14.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.499 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:14.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:17:14.499 00:17:14.499 --- 10.0.0.2 ping statistics --- 00:17:14.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.499 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=87826 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 87826 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 87826 ']' 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.499 02:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.499 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:14.499 [2024-11-08 02:21:16.216360] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:14.499 [2024-11-08 02:21:16.216439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.499 [2024-11-08 02:21:16.352151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.759 [2024-11-08 02:21:16.385950] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.759 [2024-11-08 02:21:16.386002] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.759 [2024-11-08 02:21:16.386028] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.759 [2024-11-08 02:21:16.386035] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.759 [2024-11-08 02:21:16.386041] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
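Condensed, the target launch that nvmfappstart performs at this point (binary path, -i/-e/-m arguments, and the RPC socket are exactly as traced; waitforlisten is the suite's helper that polls the socket until the application answers):

# Start the SPDK NVMe-oF target inside the test namespace and wait until
# its JSON-RPC socket (/var/tmp/spdk.sock) accepts connections.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"    # in this run: pid 87826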
00:17:14.759 [2024-11-08 02:21:16.386209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.759 [2024-11-08 02:21:16.386299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.759 [2024-11-08 02:21:16.386963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.759 [2024-11-08 02:21:16.386978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.759 [2024-11-08 02:21:16.415777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:14.759 Malloc0 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:14.759 Delay0 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:14.759 [2024-11-08 02:21:16.554805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:14.759 02:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:14.759 [2024-11-08 02:21:16.582903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.759 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:15.032 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:17:15.032 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:17:15.032 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.032 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:15.032 02:21:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:17:16.997 02:21:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:16.997 02:21:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:16.997 02:21:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:16.997 02:21:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:16.997 02:21:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:16.997 02:21:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:17:16.997 02:21:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 
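Condensed, the provisioning and attach sequence the initiator-timeout test just ran above (all RPC and connect arguments exactly as traced; rpc_cmd, waitforserial, NVME_HOSTNQN, and NVME_HOSTID come from the suite's common.sh):

# Target side: a malloc bdev (64 MB, 512-byte blocks) wrapped in a delay bdev
# with small initial latencies, exported through subsystem cnode1 on 10.0.0.3:4420.
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: connect over TCP and wait for the SPDKISFASTANDAWESOME
# serial to appear in lsblk before starting I/O.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
waitforserial SPDKISFASTANDAWESOME

The fio job launched next (libaio, 4 KiB blocks, iodepth 1, 60-second time-based write with crc32c-intel verify against /dev/nvme0n1, per the job file shown below) supplies the I/O that the test later slows down by raising the Delay0 latencies with bdev_delay_update_latency.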
00:17:16.997 02:21:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=87883 00:17:16.997 02:21:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:17:16.997 [global] 00:17:16.997 thread=1 00:17:16.997 invalidate=1 00:17:16.997 rw=write 00:17:16.997 time_based=1 00:17:16.997 runtime=60 00:17:16.997 ioengine=libaio 00:17:16.997 direct=1 00:17:16.997 bs=4096 00:17:16.997 iodepth=1 00:17:16.997 norandommap=0 00:17:16.997 numjobs=1 00:17:16.997 00:17:16.997 verify_dump=1 00:17:16.997 verify_backlog=512 00:17:16.997 verify_state_save=0 00:17:16.997 do_verify=1 00:17:16.997 verify=crc32c-intel 00:17:16.997 [job0] 00:17:16.997 filename=/dev/nvme0n1 00:17:16.997 Could not set queue depth (nvme0n1) 00:17:17.256 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:17.256 fio-3.35 00:17:17.256 Starting 1 thread 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.543 true 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.543 true 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.543 true 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.543 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:17:20.544 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.544 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:20.544 true 00:17:20.544 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.544 02:21:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:23.075 true 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:23.075 true 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:23.075 true 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:23.075 true 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:17:23.075 02:21:24 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 87883 00:18:19.314 00:18:19.314 job0: (groupid=0, jobs=1): err= 0: pid=87908: Fri Nov 8 02:22:19 2024 00:18:19.314 read: IOPS=827, BW=3311KiB/s (3390kB/s)(194MiB/60000msec) 00:18:19.314 slat (usec): min=10, max=121, avg=13.75, stdev= 4.20 00:18:19.314 clat (usec): min=153, max=669, avg=200.37, stdev=24.77 00:18:19.314 lat (usec): min=165, max=684, avg=214.12, stdev=25.54 00:18:19.314 clat percentiles (usec): 00:18:19.314 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:18:19.314 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:18:19.314 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 243], 00:18:19.314 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 338], 99.95th=[ 367], 00:18:19.314 | 99.99th=[ 562] 00:18:19.314 write: IOPS=833, BW=3333KiB/s (3413kB/s)(195MiB/60000msec); 0 zone resets 00:18:19.314 slat (usec): min=13, max=14023, avg=20.24, stdev=76.65 00:18:19.314 clat (usec): min=69, max=40430k, avg=964.10, stdev=180824.38 00:18:19.314 lat (usec): min=133, max=40430k, avg=984.34, stdev=180824.39 00:18:19.314 clat percentiles (usec): 00:18:19.314 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 129], 20.00th=[ 137], 00:18:19.314 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 159], 00:18:19.314 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 196], 00:18:19.314 | 99.00th=[ 219], 
99.50th=[ 233], 99.90th=[ 269], 99.95th=[ 351], 00:18:19.314 | 99.99th=[ 1057] 00:18:19.314 bw ( KiB/s): min= 5416, max=12288, per=100.00%, avg=10004.15, stdev=1507.01, samples=39 00:18:19.314 iops : min= 1354, max= 3072, avg=2501.03, stdev=376.74, samples=39 00:18:19.314 lat (usec) : 100=0.01%, 250=98.24%, 500=1.75%, 750=0.01% 00:18:19.314 lat (msec) : 2=0.01%, >=2000=0.01% 00:18:19.314 cpu : usr=0.54%, sys=2.25%, ctx=99674, majf=0, minf=5 00:18:19.314 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.314 issued rwts: total=49664,49991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.314 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:19.314 00:18:19.314 Run status group 0 (all jobs): 00:18:19.314 READ: bw=3311KiB/s (3390kB/s), 3311KiB/s-3311KiB/s (3390kB/s-3390kB/s), io=194MiB (203MB), run=60000-60000msec 00:18:19.314 WRITE: bw=3333KiB/s (3413kB/s), 3333KiB/s-3333KiB/s (3413kB/s-3413kB/s), io=195MiB (205MB), run=60000-60000msec 00:18:19.314 00:18:19.314 Disk stats (read/write): 00:18:19.314 nvme0n1: ios=49835/49664, merge=0/0, ticks=10265/8182, in_queue=18447, util=99.83% 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:19.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:18:19.314 nvmf hotplug test: fio successful as expected 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:18:19.314 02:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:19.314 rmmod nvme_tcp 00:18:19.314 rmmod nvme_fabrics 00:18:19.314 rmmod nvme_keyring 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 87826 ']' 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 87826 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 87826 ']' 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 87826 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87826 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:19.314 killing process with pid 87826 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87826' 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 87826 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 87826 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:18:19.314 02:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:18:19.314 00:18:19.314 real 1m4.081s 00:18:19.314 user 3m50.786s 00:18:19.314 sys 0m21.236s 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.314 ************************************ 00:18:19.314 END TEST nvmf_initiator_timeout 00:18:19.314 ************************************ 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:18:19.314 02:22:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:18:19.314 00:18:19.314 real 6m50.448s 00:18:19.314 user 17m6.605s 00:18:19.314 sys 1m50.008s 00:18:19.315 02:22:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.315 02:22:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:19.315 ************************************ 00:18:19.315 END TEST nvmf_target_extra 00:18:19.315 ************************************ 00:18:19.315 02:22:19 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:19.315 02:22:19 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:19.315 02:22:19 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.315 02:22:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:19.315 ************************************ 00:18:19.315 START TEST nvmf_host 00:18:19.315 ************************************ 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:19.315 * Looking for test storage... 00:18:19.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:19.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.315 --rc genhtml_branch_coverage=1 00:18:19.315 --rc genhtml_function_coverage=1 00:18:19.315 --rc genhtml_legend=1 00:18:19.315 --rc geninfo_all_blocks=1 00:18:19.315 --rc geninfo_unexecuted_blocks=1 00:18:19.315 00:18:19.315 ' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:19.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.315 --rc genhtml_branch_coverage=1 00:18:19.315 --rc genhtml_function_coverage=1 00:18:19.315 --rc genhtml_legend=1 00:18:19.315 --rc geninfo_all_blocks=1 00:18:19.315 --rc geninfo_unexecuted_blocks=1 00:18:19.315 00:18:19.315 ' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:19.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.315 --rc genhtml_branch_coverage=1 00:18:19.315 --rc genhtml_function_coverage=1 00:18:19.315 --rc genhtml_legend=1 00:18:19.315 --rc geninfo_all_blocks=1 00:18:19.315 --rc geninfo_unexecuted_blocks=1 00:18:19.315 00:18:19.315 ' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:19.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.315 --rc genhtml_branch_coverage=1 00:18:19.315 --rc genhtml_function_coverage=1 00:18:19.315 --rc genhtml_legend=1 00:18:19.315 --rc geninfo_all_blocks=1 00:18:19.315 --rc geninfo_unexecuted_blocks=1 00:18:19.315 00:18:19.315 ' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.315 02:22:19 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:19.315 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.315 ************************************ 00:18:19.315 START TEST nvmf_identify 00:18:19.315 ************************************ 00:18:19.315 02:22:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:19.315 * Looking for test storage... 
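For the host tests that begin here, the sourcing of nvmf/common.sh visible in the trace establishes the connection parameters every test reuses; a minimal sketch of those defaults, with the variable names taken from the trace and the host-ID derivation shown only as a plausible reconstruction, is:

  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:29f72880-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}        # reconstruction: the UUID part of the NQN doubles as the host ID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVME_CONNECT='nvme connect'

These are the same values that appear expanded in the nvme connect invocations earlier in the log.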
00:18:19.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:19.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.316 --rc genhtml_branch_coverage=1 00:18:19.316 --rc genhtml_function_coverage=1 00:18:19.316 --rc genhtml_legend=1 00:18:19.316 --rc geninfo_all_blocks=1 00:18:19.316 --rc geninfo_unexecuted_blocks=1 00:18:19.316 00:18:19.316 ' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:19.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.316 --rc genhtml_branch_coverage=1 00:18:19.316 --rc genhtml_function_coverage=1 00:18:19.316 --rc genhtml_legend=1 00:18:19.316 --rc geninfo_all_blocks=1 00:18:19.316 --rc geninfo_unexecuted_blocks=1 00:18:19.316 00:18:19.316 ' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:19.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.316 --rc genhtml_branch_coverage=1 00:18:19.316 --rc genhtml_function_coverage=1 00:18:19.316 --rc genhtml_legend=1 00:18:19.316 --rc geninfo_all_blocks=1 00:18:19.316 --rc geninfo_unexecuted_blocks=1 00:18:19.316 00:18:19.316 ' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:19.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.316 --rc genhtml_branch_coverage=1 00:18:19.316 --rc genhtml_function_coverage=1 00:18:19.316 --rc genhtml_legend=1 00:18:19.316 --rc geninfo_all_blocks=1 00:18:19.316 --rc geninfo_unexecuted_blocks=1 00:18:19.316 00:18:19.316 ' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.316 
02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:19.316 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:19.316 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.316 02:22:20 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:19.317 Cannot find device "nvmf_init_br" 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:19.317 Cannot find device "nvmf_init_br2" 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:19.317 Cannot find device "nvmf_tgt_br" 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:19.317 Cannot find device "nvmf_tgt_br2" 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:19.317 Cannot find device "nvmf_init_br" 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:19.317 Cannot find device "nvmf_init_br2" 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:19.317 Cannot find device "nvmf_tgt_br" 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:19.317 Cannot find device "nvmf_tgt_br2" 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:19.317 Cannot find device "nvmf_br" 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:19.317 Cannot find device "nvmf_init_if" 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:19.317 Cannot find device "nvmf_init_if2" 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.317 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.317 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:19.317 
02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:19.317 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:19.317 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:19.317 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:18:19.318 00:18:19.318 --- 10.0.0.3 ping statistics --- 00:18:19.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.318 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:19.318 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:19.318 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:18:19.318 00:18:19.318 --- 10.0.0.4 ping statistics --- 00:18:19.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.318 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:19.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:19.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:18:19.318 00:18:19.318 --- 10.0.0.1 ping statistics --- 00:18:19.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.318 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:19.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:19.318 00:18:19.318 --- 10.0.0.2 ping statistics --- 00:18:19.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.318 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88842 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88842 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 88842 ']' 00:18:19.318 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:19.318 [2024-11-08 02:22:20.584326] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:19.318 [2024-11-08 02:22:20.584426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.318 [2024-11-08 02:22:20.723694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.318 [2024-11-08 02:22:20.765799] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.318 [2024-11-08 02:22:20.765875] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.318 [2024-11-08 02:22:20.765899] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.318 [2024-11-08 02:22:20.765908] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.318 [2024-11-08 02:22:20.765917] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
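The target process being waited on here runs inside the nvmf_tgt_ns_spdk namespace that the nvmf_veth_init trace a little earlier constructed; condensed from those commands, the virtual topology behind addresses 10.0.0.1-10.0.0.4 is roughly:

  # The target lives in its own network namespace; the initiator stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                         # bridge the two pairs together
  ip link set nvmf_tgt_br master nvmf_br
  # (the trace also creates the *_if2/*_br2 second pair, brings every link up, and opens port 4420 in iptables)
  ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &  # start the target inside the namespace

The ping checks in the trace then confirm that 10.0.0.3 and 10.0.0.4 are reachable from the root namespace and 10.0.0.1 and 10.0.0.2 from inside the target namespace before any NVMe-oF traffic is attempted.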
00:18:19.318 [2024-11-08 02:22:20.766064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.318 [2024-11-08 02:22:20.766146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.318 [2024-11-08 02:22:20.766713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.318 [2024-11-08 02:22:20.766768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.318 [2024-11-08 02:22:20.799832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:19.318 [2024-11-08 02:22:20.863623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:19.318 Malloc0 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:19.318 [2024-11-08 02:22:20.951716] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:19.318 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.319 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:19.319 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.319 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:19.319 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.319 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:19.319 [ 00:18:19.319 { 00:18:19.319 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:19.319 "subtype": "Discovery", 00:18:19.319 "listen_addresses": [ 00:18:19.319 { 00:18:19.319 "trtype": "TCP", 00:18:19.319 "adrfam": "IPv4", 00:18:19.319 "traddr": "10.0.0.3", 00:18:19.319 "trsvcid": "4420" 00:18:19.319 } 00:18:19.319 ], 00:18:19.319 "allow_any_host": true, 00:18:19.319 "hosts": [] 00:18:19.319 }, 00:18:19.319 { 00:18:19.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.319 "subtype": "NVMe", 00:18:19.319 "listen_addresses": [ 00:18:19.319 { 00:18:19.319 "trtype": "TCP", 00:18:19.319 "adrfam": "IPv4", 00:18:19.319 "traddr": "10.0.0.3", 00:18:19.319 "trsvcid": "4420" 00:18:19.319 } 00:18:19.319 ], 00:18:19.319 "allow_any_host": true, 00:18:19.319 "hosts": [], 00:18:19.319 "serial_number": "SPDK00000000000001", 00:18:19.319 "model_number": "SPDK bdev Controller", 00:18:19.319 "max_namespaces": 32, 00:18:19.319 "min_cntlid": 1, 00:18:19.319 "max_cntlid": 65519, 00:18:19.319 "namespaces": [ 00:18:19.319 { 00:18:19.319 "nsid": 1, 00:18:19.319 "bdev_name": "Malloc0", 00:18:19.319 "name": "Malloc0", 00:18:19.319 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:19.319 "eui64": "ABCDEF0123456789", 00:18:19.319 "uuid": "c918ba87-3d9a-4647-a89e-55220dc7ab60" 00:18:19.319 } 00:18:19.319 ] 00:18:19.319 } 00:18:19.319 ] 00:18:19.319 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.319 02:22:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:19.319 [2024-11-08 02:22:21.006984] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
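The nvmf_get_subsystems dump above reflects the state built by the preceding rpc_cmd calls: a TCP transport, a Malloc0 bdev exposed as namespace 1 of nqn.2016-06.io.spdk:cnode1, and listeners for both that subsystem and the discovery subsystem on 10.0.0.3:4420. The same target configuration can be reproduced by hand with the SPDK RPC client (a sketch, assuming the test's rpc_cmd wrapper maps onto scripts/rpc.py and the default RPC socket):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  ./scripts/rpc.py nvmf_get_subsystems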
00:18:19.319 [2024-11-08 02:22:21.007050] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88870 ] 00:18:19.319 [2024-11-08 02:22:21.154766] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:18:19.319 [2024-11-08 02:22:21.154862] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:19.319 [2024-11-08 02:22:21.154872] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:19.319 [2024-11-08 02:22:21.154885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:19.319 [2024-11-08 02:22:21.154896] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:19.319 [2024-11-08 02:22:21.155281] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:18:19.319 [2024-11-08 02:22:21.155366] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b81bd0 0 00:18:19.319 [2024-11-08 02:22:21.169134] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:19.319 [2024-11-08 02:22:21.169166] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:19.319 [2024-11-08 02:22:21.169174] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:19.319 [2024-11-08 02:22:21.169179] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:19.319 [2024-11-08 02:22:21.169223] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.169232] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.169238] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b81bd0) 00:18:19.319 [2024-11-08 02:22:21.169255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:19.319 [2024-11-08 02:22:21.169292] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc80c0, cid 0, qid 0 00:18:19.319 [2024-11-08 02:22:21.177196] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.319 [2024-11-08 02:22:21.177218] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.319 [2024-11-08 02:22:21.177240] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177245] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc80c0) on tqpair=0x1b81bd0 00:18:19.319 [2024-11-08 02:22:21.177256] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:19.319 [2024-11-08 02:22:21.177264] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:18:19.319 [2024-11-08 02:22:21.177271] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:18:19.319 [2024-11-08 02:22:21.177289] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177295] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.319 
[2024-11-08 02:22:21.177299] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b81bd0) 00:18:19.319 [2024-11-08 02:22:21.177309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.319 [2024-11-08 02:22:21.177338] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc80c0, cid 0, qid 0 00:18:19.319 [2024-11-08 02:22:21.177398] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.319 [2024-11-08 02:22:21.177405] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.319 [2024-11-08 02:22:21.177409] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177414] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc80c0) on tqpair=0x1b81bd0 00:18:19.319 [2024-11-08 02:22:21.177420] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:18:19.319 [2024-11-08 02:22:21.177428] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:18:19.319 [2024-11-08 02:22:21.177436] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177441] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177445] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b81bd0) 00:18:19.319 [2024-11-08 02:22:21.177453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.319 [2024-11-08 02:22:21.177487] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc80c0, cid 0, qid 0 00:18:19.319 [2024-11-08 02:22:21.177537] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.319 [2024-11-08 02:22:21.177545] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.319 [2024-11-08 02:22:21.177549] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177553] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc80c0) on tqpair=0x1b81bd0 00:18:19.319 [2024-11-08 02:22:21.177574] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:18:19.319 [2024-11-08 02:22:21.177583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:18:19.319 [2024-11-08 02:22:21.177591] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177595] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177599] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b81bd0) 00:18:19.319 [2024-11-08 02:22:21.177607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.319 [2024-11-08 02:22:21.177624] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc80c0, cid 0, qid 0 00:18:19.319 [2024-11-08 02:22:21.177671] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.319 [2024-11-08 02:22:21.177678] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.319 [2024-11-08 02:22:21.177682] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177687] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc80c0) on tqpair=0x1b81bd0 00:18:19.319 [2024-11-08 02:22:21.177693] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:19.319 [2024-11-08 02:22:21.177703] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177708] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177712] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b81bd0) 00:18:19.319 [2024-11-08 02:22:21.177720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.319 [2024-11-08 02:22:21.177737] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc80c0, cid 0, qid 0 00:18:19.319 [2024-11-08 02:22:21.177780] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.319 [2024-11-08 02:22:21.177788] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.319 [2024-11-08 02:22:21.177792] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177796] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc80c0) on tqpair=0x1b81bd0 00:18:19.319 [2024-11-08 02:22:21.177801] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:18:19.319 [2024-11-08 02:22:21.177807] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:18:19.319 [2024-11-08 02:22:21.177815] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:19.319 [2024-11-08 02:22:21.177921] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:18:19.319 [2024-11-08 02:22:21.177927] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:19.319 [2024-11-08 02:22:21.177936] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177941] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.319 [2024-11-08 02:22:21.177945] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b81bd0) 00:18:19.319 [2024-11-08 02:22:21.177952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.319 [2024-11-08 02:22:21.177970] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc80c0, cid 0, qid 0 00:18:19.319 [2024-11-08 02:22:21.178015] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.320 [2024-11-08 02:22:21.178022] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.320 [2024-11-08 02:22:21.178026] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.320 
[2024-11-08 02:22:21.178030] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc80c0) on tqpair=0x1b81bd0 00:18:19.320 [2024-11-08 02:22:21.178036] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:19.320 [2024-11-08 02:22:21.178046] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178051] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178055] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b81bd0) 00:18:19.320 [2024-11-08 02:22:21.178062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.320 [2024-11-08 02:22:21.178079] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc80c0, cid 0, qid 0 00:18:19.320 [2024-11-08 02:22:21.178125] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.320 [2024-11-08 02:22:21.178132] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.320 [2024-11-08 02:22:21.178136] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178141] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc80c0) on tqpair=0x1b81bd0 00:18:19.320 [2024-11-08 02:22:21.178159] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:19.320 [2024-11-08 02:22:21.178166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:18:19.320 [2024-11-08 02:22:21.178192] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:18:19.320 [2024-11-08 02:22:21.178207] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:18:19.320 [2024-11-08 02:22:21.178218] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178222] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b81bd0) 00:18:19.320 [2024-11-08 02:22:21.178231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.320 [2024-11-08 02:22:21.178253] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc80c0, cid 0, qid 0 00:18:19.320 [2024-11-08 02:22:21.178345] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.320 [2024-11-08 02:22:21.178353] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.320 [2024-11-08 02:22:21.178357] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178362] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b81bd0): datao=0, datal=4096, cccid=0 00:18:19.320 [2024-11-08 02:22:21.178367] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc80c0) on tqpair(0x1b81bd0): expected_datao=0, payload_size=4096 00:18:19.320 [2024-11-08 02:22:21.178373] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.320 
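At this point CSTS.RDY has come back as 1, the discovery controller is ready, and the flow proceeds to the IDENTIFY admin command and the log-page reads that produce the discovery report below. Because nvme-tcp was modprobed earlier, the same discovery service could also be queried from the kernel initiator instead of spdk_nvme_identify (a sketch, assuming nvme-cli is installed and run from the host side of the namespace pair):

  nvme discover -t tcp -a 10.0.0.3 -s 4420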
[2024-11-08 02:22:21.178381] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178386] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178396] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.320 [2024-11-08 02:22:21.178402] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.320 [2024-11-08 02:22:21.178407] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178411] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc80c0) on tqpair=0x1b81bd0 00:18:19.320 [2024-11-08 02:22:21.178420] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:18:19.320 [2024-11-08 02:22:21.178426] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:18:19.320 [2024-11-08 02:22:21.178432] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:18:19.320 [2024-11-08 02:22:21.178438] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:18:19.320 [2024-11-08 02:22:21.178443] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:18:19.320 [2024-11-08 02:22:21.178449] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:18:19.320 [2024-11-08 02:22:21.178458] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:18:19.320 [2024-11-08 02:22:21.178466] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178471] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178475] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b81bd0) 00:18:19.320 [2024-11-08 02:22:21.178483] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.320 [2024-11-08 02:22:21.178518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc80c0, cid 0, qid 0 00:18:19.320 [2024-11-08 02:22:21.178571] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.320 [2024-11-08 02:22:21.178578] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.320 [2024-11-08 02:22:21.178582] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178587] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc80c0) on tqpair=0x1b81bd0 00:18:19.320 [2024-11-08 02:22:21.178595] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178599] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178603] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b81bd0) 00:18:19.320 [2024-11-08 02:22:21.178610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.320 [2024-11-08 02:22:21.178617] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178621] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178625] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b81bd0) 00:18:19.320 [2024-11-08 02:22:21.178632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.320 [2024-11-08 02:22:21.178638] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178642] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178646] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b81bd0) 00:18:19.320 [2024-11-08 02:22:21.178652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.320 [2024-11-08 02:22:21.178659] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178663] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.320 [2024-11-08 02:22:21.178674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.320 [2024-11-08 02:22:21.178679] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:18:19.320 [2024-11-08 02:22:21.178693] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:19.320 [2024-11-08 02:22:21.178700] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b81bd0) 00:18:19.320 [2024-11-08 02:22:21.178712] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.320 [2024-11-08 02:22:21.178733] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc80c0, cid 0, qid 0 00:18:19.320 [2024-11-08 02:22:21.178739] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8240, cid 1, qid 0 00:18:19.320 [2024-11-08 02:22:21.178745] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc83c0, cid 2, qid 0 00:18:19.320 [2024-11-08 02:22:21.178750] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.320 [2024-11-08 02:22:21.178755] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc86c0, cid 4, qid 0 00:18:19.320 [2024-11-08 02:22:21.178877] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.320 [2024-11-08 02:22:21.178885] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.320 [2024-11-08 02:22:21.178889] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178894] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc86c0) on tqpair=0x1b81bd0 00:18:19.320 [2024-11-08 02:22:21.178900] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:18:19.320 [2024-11-08 02:22:21.178906] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:18:19.320 [2024-11-08 02:22:21.178918] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.178923] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b81bd0) 00:18:19.320 [2024-11-08 02:22:21.178930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.320 [2024-11-08 02:22:21.178950] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc86c0, cid 4, qid 0 00:18:19.320 [2024-11-08 02:22:21.179008] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.320 [2024-11-08 02:22:21.179016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.320 [2024-11-08 02:22:21.179020] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.179024] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b81bd0): datao=0, datal=4096, cccid=4 00:18:19.320 [2024-11-08 02:22:21.179029] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc86c0) on tqpair(0x1b81bd0): expected_datao=0, payload_size=4096 00:18:19.320 [2024-11-08 02:22:21.179034] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.179042] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.179047] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.179055] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.320 [2024-11-08 02:22:21.179062] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.320 [2024-11-08 02:22:21.179066] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.179071] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc86c0) on tqpair=0x1b81bd0 00:18:19.320 [2024-11-08 02:22:21.179085] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:18:19.320 [2024-11-08 02:22:21.179124] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.320 [2024-11-08 02:22:21.179132] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b81bd0) 00:18:19.321 [2024-11-08 02:22:21.179140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.321 [2024-11-08 02:22:21.179149] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179153] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179157] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b81bd0) 00:18:19.321 [2024-11-08 02:22:21.179164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.321 [2024-11-08 02:22:21.179202] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1bc86c0, cid 4, qid 0 00:18:19.321 [2024-11-08 02:22:21.179209] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8840, cid 5, qid 0 00:18:19.321 [2024-11-08 02:22:21.179303] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.321 [2024-11-08 02:22:21.179310] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.321 [2024-11-08 02:22:21.179314] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179318] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b81bd0): datao=0, datal=1024, cccid=4 00:18:19.321 [2024-11-08 02:22:21.179324] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc86c0) on tqpair(0x1b81bd0): expected_datao=0, payload_size=1024 00:18:19.321 [2024-11-08 02:22:21.179328] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179335] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179340] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179346] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.321 [2024-11-08 02:22:21.179352] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.321 [2024-11-08 02:22:21.179356] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179361] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8840) on tqpair=0x1b81bd0 00:18:19.321 [2024-11-08 02:22:21.179378] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.321 [2024-11-08 02:22:21.179386] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.321 [2024-11-08 02:22:21.179390] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179395] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc86c0) on tqpair=0x1b81bd0 00:18:19.321 [2024-11-08 02:22:21.179406] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179411] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b81bd0) 00:18:19.321 [2024-11-08 02:22:21.179419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.321 [2024-11-08 02:22:21.179442] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc86c0, cid 4, qid 0 00:18:19.321 [2024-11-08 02:22:21.179509] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.321 [2024-11-08 02:22:21.179516] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.321 [2024-11-08 02:22:21.179520] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179524] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b81bd0): datao=0, datal=3072, cccid=4 00:18:19.321 [2024-11-08 02:22:21.179529] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc86c0) on tqpair(0x1b81bd0): expected_datao=0, payload_size=3072 00:18:19.321 [2024-11-08 02:22:21.179534] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179541] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179546] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179554] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.321 [2024-11-08 02:22:21.179561] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.321 [2024-11-08 02:22:21.179565] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179569] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc86c0) on tqpair=0x1b81bd0 00:18:19.321 [2024-11-08 02:22:21.179578] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179584] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b81bd0) 00:18:19.321 [2024-11-08 02:22:21.179591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.321 [2024-11-08 02:22:21.179614] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc86c0, cid 4, qid 0 00:18:19.321 [2024-11-08 02:22:21.179677] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.321 [2024-11-08 02:22:21.179684] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.321 [2024-11-08 02:22:21.179688] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179692] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b81bd0): datao=0, datal=8, cccid=4 00:18:19.321 [2024-11-08 02:22:21.179697] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bc86c0) on tqpair(0x1b81bd0): expected_datao=0, payload_size=8 00:18:19.321 [2024-11-08 02:22:21.179702] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179708] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179712] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179727] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.321 [2024-11-08 02:22:21.179735] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.321 [2024-11-08 02:22:21.179739] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.321 [2024-11-08 02:22:21.179743] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc86c0) on tqpair=0x1b81bd0 00:18:19.321 ===================================================== 00:18:19.321 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:19.321 ===================================================== 00:18:19.321 Controller Capabilities/Features 00:18:19.321 ================================ 00:18:19.321 Vendor ID: 0000 00:18:19.321 Subsystem Vendor ID: 0000 00:18:19.321 Serial Number: .................... 00:18:19.321 Model Number: ........................................ 
00:18:19.321 Firmware Version: 24.09.1 00:18:19.321 Recommended Arb Burst: 0 00:18:19.321 IEEE OUI Identifier: 00 00 00 00:18:19.321 Multi-path I/O 00:18:19.321 May have multiple subsystem ports: No 00:18:19.321 May have multiple controllers: No 00:18:19.321 Associated with SR-IOV VF: No 00:18:19.321 Max Data Transfer Size: 131072 00:18:19.321 Max Number of Namespaces: 0 00:18:19.321 Max Number of I/O Queues: 1024 00:18:19.321 NVMe Specification Version (VS): 1.3 00:18:19.321 NVMe Specification Version (Identify): 1.3 00:18:19.321 Maximum Queue Entries: 128 00:18:19.321 Contiguous Queues Required: Yes 00:18:19.321 Arbitration Mechanisms Supported 00:18:19.321 Weighted Round Robin: Not Supported 00:18:19.321 Vendor Specific: Not Supported 00:18:19.321 Reset Timeout: 15000 ms 00:18:19.321 Doorbell Stride: 4 bytes 00:18:19.321 NVM Subsystem Reset: Not Supported 00:18:19.321 Command Sets Supported 00:18:19.321 NVM Command Set: Supported 00:18:19.321 Boot Partition: Not Supported 00:18:19.321 Memory Page Size Minimum: 4096 bytes 00:18:19.321 Memory Page Size Maximum: 4096 bytes 00:18:19.321 Persistent Memory Region: Not Supported 00:18:19.321 Optional Asynchronous Events Supported 00:18:19.321 Namespace Attribute Notices: Not Supported 00:18:19.321 Firmware Activation Notices: Not Supported 00:18:19.321 ANA Change Notices: Not Supported 00:18:19.321 PLE Aggregate Log Change Notices: Not Supported 00:18:19.321 LBA Status Info Alert Notices: Not Supported 00:18:19.321 EGE Aggregate Log Change Notices: Not Supported 00:18:19.321 Normal NVM Subsystem Shutdown event: Not Supported 00:18:19.321 Zone Descriptor Change Notices: Not Supported 00:18:19.321 Discovery Log Change Notices: Supported 00:18:19.321 Controller Attributes 00:18:19.321 128-bit Host Identifier: Not Supported 00:18:19.321 Non-Operational Permissive Mode: Not Supported 00:18:19.321 NVM Sets: Not Supported 00:18:19.321 Read Recovery Levels: Not Supported 00:18:19.321 Endurance Groups: Not Supported 00:18:19.321 Predictable Latency Mode: Not Supported 00:18:19.321 Traffic Based Keep ALive: Not Supported 00:18:19.321 Namespace Granularity: Not Supported 00:18:19.321 SQ Associations: Not Supported 00:18:19.321 UUID List: Not Supported 00:18:19.321 Multi-Domain Subsystem: Not Supported 00:18:19.321 Fixed Capacity Management: Not Supported 00:18:19.321 Variable Capacity Management: Not Supported 00:18:19.321 Delete Endurance Group: Not Supported 00:18:19.321 Delete NVM Set: Not Supported 00:18:19.321 Extended LBA Formats Supported: Not Supported 00:18:19.321 Flexible Data Placement Supported: Not Supported 00:18:19.321 00:18:19.321 Controller Memory Buffer Support 00:18:19.321 ================================ 00:18:19.321 Supported: No 00:18:19.321 00:18:19.321 Persistent Memory Region Support 00:18:19.321 ================================ 00:18:19.321 Supported: No 00:18:19.321 00:18:19.321 Admin Command Set Attributes 00:18:19.321 ============================ 00:18:19.321 Security Send/Receive: Not Supported 00:18:19.321 Format NVM: Not Supported 00:18:19.321 Firmware Activate/Download: Not Supported 00:18:19.321 Namespace Management: Not Supported 00:18:19.321 Device Self-Test: Not Supported 00:18:19.321 Directives: Not Supported 00:18:19.321 NVMe-MI: Not Supported 00:18:19.321 Virtualization Management: Not Supported 00:18:19.321 Doorbell Buffer Config: Not Supported 00:18:19.321 Get LBA Status Capability: Not Supported 00:18:19.321 Command & Feature Lockdown Capability: Not Supported 00:18:19.321 Abort Command Limit: 1 00:18:19.321 
Async Event Request Limit: 4 00:18:19.321 Number of Firmware Slots: N/A 00:18:19.321 Firmware Slot 1 Read-Only: N/A 00:18:19.321 Firmware Activation Without Reset: N/A 00:18:19.321 Multiple Update Detection Support: N/A 00:18:19.321 Firmware Update Granularity: No Information Provided 00:18:19.321 Per-Namespace SMART Log: No 00:18:19.321 Asymmetric Namespace Access Log Page: Not Supported 00:18:19.322 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:19.322 Command Effects Log Page: Not Supported 00:18:19.322 Get Log Page Extended Data: Supported 00:18:19.322 Telemetry Log Pages: Not Supported 00:18:19.322 Persistent Event Log Pages: Not Supported 00:18:19.322 Supported Log Pages Log Page: May Support 00:18:19.322 Commands Supported & Effects Log Page: Not Supported 00:18:19.322 Feature Identifiers & Effects Log Page:May Support 00:18:19.322 NVMe-MI Commands & Effects Log Page: May Support 00:18:19.322 Data Area 4 for Telemetry Log: Not Supported 00:18:19.322 Error Log Page Entries Supported: 128 00:18:19.322 Keep Alive: Not Supported 00:18:19.322 00:18:19.322 NVM Command Set Attributes 00:18:19.322 ========================== 00:18:19.322 Submission Queue Entry Size 00:18:19.322 Max: 1 00:18:19.322 Min: 1 00:18:19.322 Completion Queue Entry Size 00:18:19.322 Max: 1 00:18:19.322 Min: 1 00:18:19.322 Number of Namespaces: 0 00:18:19.322 Compare Command: Not Supported 00:18:19.322 Write Uncorrectable Command: Not Supported 00:18:19.322 Dataset Management Command: Not Supported 00:18:19.322 Write Zeroes Command: Not Supported 00:18:19.322 Set Features Save Field: Not Supported 00:18:19.322 Reservations: Not Supported 00:18:19.322 Timestamp: Not Supported 00:18:19.322 Copy: Not Supported 00:18:19.322 Volatile Write Cache: Not Present 00:18:19.322 Atomic Write Unit (Normal): 1 00:18:19.322 Atomic Write Unit (PFail): 1 00:18:19.322 Atomic Compare & Write Unit: 1 00:18:19.322 Fused Compare & Write: Supported 00:18:19.322 Scatter-Gather List 00:18:19.322 SGL Command Set: Supported 00:18:19.322 SGL Keyed: Supported 00:18:19.322 SGL Bit Bucket Descriptor: Not Supported 00:18:19.322 SGL Metadata Pointer: Not Supported 00:18:19.322 Oversized SGL: Not Supported 00:18:19.322 SGL Metadata Address: Not Supported 00:18:19.322 SGL Offset: Supported 00:18:19.322 Transport SGL Data Block: Not Supported 00:18:19.322 Replay Protected Memory Block: Not Supported 00:18:19.322 00:18:19.322 Firmware Slot Information 00:18:19.322 ========================= 00:18:19.322 Active slot: 0 00:18:19.322 00:18:19.322 00:18:19.322 Error Log 00:18:19.322 ========= 00:18:19.322 00:18:19.322 Active Namespaces 00:18:19.322 ================= 00:18:19.322 Discovery Log Page 00:18:19.322 ================== 00:18:19.322 Generation Counter: 2 00:18:19.322 Number of Records: 2 00:18:19.322 Record Format: 0 00:18:19.322 00:18:19.322 Discovery Log Entry 0 00:18:19.322 ---------------------- 00:18:19.322 Transport Type: 3 (TCP) 00:18:19.322 Address Family: 1 (IPv4) 00:18:19.322 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:19.322 Entry Flags: 00:18:19.322 Duplicate Returned Information: 1 00:18:19.322 Explicit Persistent Connection Support for Discovery: 1 00:18:19.322 Transport Requirements: 00:18:19.322 Secure Channel: Not Required 00:18:19.322 Port ID: 0 (0x0000) 00:18:19.322 Controller ID: 65535 (0xffff) 00:18:19.322 Admin Max SQ Size: 128 00:18:19.322 Transport Service Identifier: 4420 00:18:19.322 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:19.322 Transport Address: 10.0.0.3 00:18:19.322 
Discovery Log Entry 1 00:18:19.322 ---------------------- 00:18:19.322 Transport Type: 3 (TCP) 00:18:19.322 Address Family: 1 (IPv4) 00:18:19.322 Subsystem Type: 2 (NVM Subsystem) 00:18:19.322 Entry Flags: 00:18:19.322 Duplicate Returned Information: 0 00:18:19.322 Explicit Persistent Connection Support for Discovery: 0 00:18:19.322 Transport Requirements: 00:18:19.322 Secure Channel: Not Required 00:18:19.322 Port ID: 0 (0x0000) 00:18:19.322 Controller ID: 65535 (0xffff) 00:18:19.322 Admin Max SQ Size: 128 00:18:19.322 Transport Service Identifier: 4420 00:18:19.322 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:19.322 Transport Address: 10.0.0.3 [2024-11-08 02:22:21.179847] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:18:19.322 [2024-11-08 02:22:21.179861] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc80c0) on tqpair=0x1b81bd0 00:18:19.322 [2024-11-08 02:22:21.179869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.322 [2024-11-08 02:22:21.179875] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8240) on tqpair=0x1b81bd0 00:18:19.322 [2024-11-08 02:22:21.179881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.322 [2024-11-08 02:22:21.179886] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc83c0) on tqpair=0x1b81bd0 00:18:19.322 [2024-11-08 02:22:21.179891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.322 [2024-11-08 02:22:21.179897] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.322 [2024-11-08 02:22:21.179902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.322 [2024-11-08 02:22:21.179912] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.179917] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.179921] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.322 [2024-11-08 02:22:21.179929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.322 [2024-11-08 02:22:21.179952] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.322 [2024-11-08 02:22:21.180000] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.322 [2024-11-08 02:22:21.180008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.322 [2024-11-08 02:22:21.180012] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180016] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.322 [2024-11-08 02:22:21.180025] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180029] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180033] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.322 [2024-11-08 
02:22:21.180041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.322 [2024-11-08 02:22:21.180063] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.322 [2024-11-08 02:22:21.180146] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.322 [2024-11-08 02:22:21.180156] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.322 [2024-11-08 02:22:21.180160] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180165] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.322 [2024-11-08 02:22:21.180170] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:18:19.322 [2024-11-08 02:22:21.180180] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:18:19.322 [2024-11-08 02:22:21.180193] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180198] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180202] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.322 [2024-11-08 02:22:21.180210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.322 [2024-11-08 02:22:21.180232] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.322 [2024-11-08 02:22:21.180280] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.322 [2024-11-08 02:22:21.180288] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.322 [2024-11-08 02:22:21.180292] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180296] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.322 [2024-11-08 02:22:21.180308] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180313] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180317] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.322 [2024-11-08 02:22:21.180334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.322 [2024-11-08 02:22:21.180351] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.322 [2024-11-08 02:22:21.180400] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.322 [2024-11-08 02:22:21.180407] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.322 [2024-11-08 02:22:21.180411] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180416] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.322 [2024-11-08 02:22:21.180427] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180432] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.322 [2024-11-08 02:22:21.180436] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.322 [2024-11-08 02:22:21.180444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.323 [2024-11-08 02:22:21.180476] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.323 [2024-11-08 02:22:21.180518] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.323 [2024-11-08 02:22:21.180525] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.323 [2024-11-08 02:22:21.180529] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180533] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.323 [2024-11-08 02:22:21.180544] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180549] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180553] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.323 [2024-11-08 02:22:21.180560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.323 [2024-11-08 02:22:21.180576] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.323 [2024-11-08 02:22:21.180620] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.323 [2024-11-08 02:22:21.180627] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.323 [2024-11-08 02:22:21.180631] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180636] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.323 [2024-11-08 02:22:21.180646] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180651] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180655] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.323 [2024-11-08 02:22:21.180663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.323 [2024-11-08 02:22:21.180679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.323 [2024-11-08 02:22:21.180723] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.323 [2024-11-08 02:22:21.180730] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.323 [2024-11-08 02:22:21.180734] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180738] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.323 [2024-11-08 02:22:21.180749] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180754] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180758] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.323 [2024-11-08 02:22:21.180766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.323 [2024-11-08 02:22:21.180782] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.323 [2024-11-08 02:22:21.180824] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.323 [2024-11-08 02:22:21.180831] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.323 [2024-11-08 02:22:21.180835] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180840] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.323 [2024-11-08 02:22:21.180850] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180855] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180859] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.323 [2024-11-08 02:22:21.180866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.323 [2024-11-08 02:22:21.180883] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.323 [2024-11-08 02:22:21.180928] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.323 [2024-11-08 02:22:21.180935] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.323 [2024-11-08 02:22:21.180939] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180943] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.323 [2024-11-08 02:22:21.180954] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180959] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.180963] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.323 [2024-11-08 02:22:21.180970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.323 [2024-11-08 02:22:21.180986] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.323 [2024-11-08 02:22:21.181039] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.323 [2024-11-08 02:22:21.181046] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.323 [2024-11-08 02:22:21.181050] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.181054] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.323 [2024-11-08 02:22:21.181064] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.181069] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.181073] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.323 [2024-11-08 02:22:21.181081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.323 [2024-11-08 02:22:21.181097] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.323 
[2024-11-08 02:22:21.184209] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.323 [2024-11-08 02:22:21.184225] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.323 [2024-11-08 02:22:21.184229] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.184234] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.323 [2024-11-08 02:22:21.184248] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.184253] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.184257] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b81bd0) 00:18:19.323 [2024-11-08 02:22:21.184266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.323 [2024-11-08 02:22:21.184291] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bc8540, cid 3, qid 0 00:18:19.323 [2024-11-08 02:22:21.184343] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.323 [2024-11-08 02:22:21.184350] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.323 [2024-11-08 02:22:21.184353] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.323 [2024-11-08 02:22:21.184358] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bc8540) on tqpair=0x1b81bd0 00:18:19.323 [2024-11-08 02:22:21.184366] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:18:19.587 00:18:19.587 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:18:19.587 [2024-11-08 02:22:21.229615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
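The command above is the stock spdk_nvme_identify utility from build/bin, pointed with -L all at the TCP listener at 10.0.0.3:4420, so the records that follow are its debug trace of the admin-queue bring-up (icreq/icresp, VS and CAP reads, CC.EN=1, CSTS.RDY polling, Identify, AER and keep-alive setup). As a reading aid, here is a minimal sketch of the same flow against SPDK's public C API; it is an approximation for following the trace, not the tool's actual source, and the address and subsystem NQN are simply copied from the -r argument above (the application name is hypothetical).

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same connection string the test passes via -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* spdk_nvme_connect() drives the admin-queue state machine traced below:
         * icreq/icresp, VS/CAP reads, CC.EN=1, CSTS.RDY wait, Identify, AER setup. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* A few of the Identify Controller fields printed in the dump further down. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);
        printf("Model Number:  %.40s\n", (const char *)cdata->mn);
        printf("Firmware:      %.8s\n", (const char *)cdata->fr);
        printf("Max Namespaces: %u\n", cdata->nn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }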
00:18:19.587 [2024-11-08 02:22:21.229665] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88872 ] 00:18:19.587 [2024-11-08 02:22:21.366654] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:18:19.587 [2024-11-08 02:22:21.366725] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:19.587 [2024-11-08 02:22:21.366733] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:19.587 [2024-11-08 02:22:21.366742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:19.587 [2024-11-08 02:22:21.366750] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:19.587 [2024-11-08 02:22:21.367040] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:18:19.587 [2024-11-08 02:22:21.367114] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8b0bd0 0 00:18:19.587 [2024-11-08 02:22:21.372141] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:19.587 [2024-11-08 02:22:21.372181] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:19.587 [2024-11-08 02:22:21.372202] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:19.587 [2024-11-08 02:22:21.372206] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:19.587 [2024-11-08 02:22:21.372239] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.372245] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.372249] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0bd0) 00:18:19.587 [2024-11-08 02:22:21.372260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:19.587 [2024-11-08 02:22:21.372291] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f70c0, cid 0, qid 0 00:18:19.587 [2024-11-08 02:22:21.380251] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.587 [2024-11-08 02:22:21.380275] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.587 [2024-11-08 02:22:21.380280] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380286] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f70c0) on tqpair=0x8b0bd0 00:18:19.587 [2024-11-08 02:22:21.380297] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:19.587 [2024-11-08 02:22:21.380305] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:18:19.587 [2024-11-08 02:22:21.380312] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:18:19.587 [2024-11-08 02:22:21.380329] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380335] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380340] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0bd0) 00:18:19.587 [2024-11-08 02:22:21.380350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.587 [2024-11-08 02:22:21.380378] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f70c0, cid 0, qid 0 00:18:19.587 [2024-11-08 02:22:21.380449] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.587 [2024-11-08 02:22:21.380457] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.587 [2024-11-08 02:22:21.380461] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380480] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f70c0) on tqpair=0x8b0bd0 00:18:19.587 [2024-11-08 02:22:21.380486] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:18:19.587 [2024-11-08 02:22:21.380510] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:18:19.587 [2024-11-08 02:22:21.380533] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380553] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380557] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0bd0) 00:18:19.587 [2024-11-08 02:22:21.380581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.587 [2024-11-08 02:22:21.380601] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f70c0, cid 0, qid 0 00:18:19.587 [2024-11-08 02:22:21.380650] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.587 [2024-11-08 02:22:21.380657] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.587 [2024-11-08 02:22:21.380661] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380666] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f70c0) on tqpair=0x8b0bd0 00:18:19.587 [2024-11-08 02:22:21.380672] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:18:19.587 [2024-11-08 02:22:21.380681] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:18:19.587 [2024-11-08 02:22:21.380689] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380694] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380698] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0bd0) 00:18:19.587 [2024-11-08 02:22:21.380706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.587 [2024-11-08 02:22:21.380725] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f70c0, cid 0, qid 0 00:18:19.587 [2024-11-08 02:22:21.380772] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.587 [2024-11-08 02:22:21.380780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.587 [2024-11-08 02:22:21.380784] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380788] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f70c0) on tqpair=0x8b0bd0 00:18:19.587 [2024-11-08 02:22:21.380794] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:19.587 [2024-11-08 02:22:21.380805] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380810] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380814] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0bd0) 00:18:19.587 [2024-11-08 02:22:21.380822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.587 [2024-11-08 02:22:21.380841] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f70c0, cid 0, qid 0 00:18:19.587 [2024-11-08 02:22:21.380885] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.587 [2024-11-08 02:22:21.380892] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.587 [2024-11-08 02:22:21.380896] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.587 [2024-11-08 02:22:21.380901] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f70c0) on tqpair=0x8b0bd0 00:18:19.587 [2024-11-08 02:22:21.380906] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:18:19.587 [2024-11-08 02:22:21.380912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:18:19.587 [2024-11-08 02:22:21.380920] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:19.587 [2024-11-08 02:22:21.381028] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:18:19.587 [2024-11-08 02:22:21.381032] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:19.587 [2024-11-08 02:22:21.381041] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381046] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381050] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0bd0) 00:18:19.588 [2024-11-08 02:22:21.381073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.588 [2024-11-08 02:22:21.381092] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f70c0, cid 0, qid 0 00:18:19.588 [2024-11-08 02:22:21.381137] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.588 [2024-11-08 02:22:21.381145] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.588 [2024-11-08 02:22:21.381149] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381153] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f70c0) on tqpair=0x8b0bd0 00:18:19.588 [2024-11-08 02:22:21.381159] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:19.588 [2024-11-08 02:22:21.381170] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381175] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381179] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0bd0) 00:18:19.588 [2024-11-08 02:22:21.381187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.588 [2024-11-08 02:22:21.381205] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f70c0, cid 0, qid 0 00:18:19.588 [2024-11-08 02:22:21.381265] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.588 [2024-11-08 02:22:21.381274] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.588 [2024-11-08 02:22:21.381278] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381283] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f70c0) on tqpair=0x8b0bd0 00:18:19.588 [2024-11-08 02:22:21.381288] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:19.588 [2024-11-08 02:22:21.381293] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:18:19.588 [2024-11-08 02:22:21.381302] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:18:19.588 [2024-11-08 02:22:21.381317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:18:19.588 [2024-11-08 02:22:21.381328] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381332] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0bd0) 00:18:19.588 [2024-11-08 02:22:21.381341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.588 [2024-11-08 02:22:21.381362] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f70c0, cid 0, qid 0 00:18:19.588 [2024-11-08 02:22:21.381444] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.588 [2024-11-08 02:22:21.381451] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.588 [2024-11-08 02:22:21.381456] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381460] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0bd0): datao=0, datal=4096, cccid=0 00:18:19.588 [2024-11-08 02:22:21.381465] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f70c0) on tqpair(0x8b0bd0): expected_datao=0, payload_size=4096 00:18:19.588 [2024-11-08 02:22:21.381470] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381478] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381483] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.588 [2024-11-08 
02:22:21.381492] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.588 [2024-11-08 02:22:21.381499] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.588 [2024-11-08 02:22:21.381503] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381507] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f70c0) on tqpair=0x8b0bd0 00:18:19.588 [2024-11-08 02:22:21.381516] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:18:19.588 [2024-11-08 02:22:21.381521] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:18:19.588 [2024-11-08 02:22:21.381526] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:18:19.588 [2024-11-08 02:22:21.381530] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:18:19.588 [2024-11-08 02:22:21.381535] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:18:19.588 [2024-11-08 02:22:21.381541] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:18:19.588 [2024-11-08 02:22:21.381550] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:18:19.588 [2024-11-08 02:22:21.381558] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381563] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0bd0) 00:18:19.588 [2024-11-08 02:22:21.381575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.588 [2024-11-08 02:22:21.381595] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f70c0, cid 0, qid 0 00:18:19.588 [2024-11-08 02:22:21.381645] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.588 [2024-11-08 02:22:21.381652] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.588 [2024-11-08 02:22:21.381656] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381661] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f70c0) on tqpair=0x8b0bd0 00:18:19.588 [2024-11-08 02:22:21.381668] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381673] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381677] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8b0bd0) 00:18:19.588 [2024-11-08 02:22:21.381684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.588 [2024-11-08 02:22:21.381691] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381695] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381699] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8b0bd0) 00:18:19.588 
[2024-11-08 02:22:21.381706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.588 [2024-11-08 02:22:21.381712] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381717] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381721] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8b0bd0) 00:18:19.588 [2024-11-08 02:22:21.381727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.588 [2024-11-08 02:22:21.381733] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381738] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381742] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.588 [2024-11-08 02:22:21.381748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.588 [2024-11-08 02:22:21.381754] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:19.588 [2024-11-08 02:22:21.381768] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:19.588 [2024-11-08 02:22:21.381776] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381780] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0bd0) 00:18:19.588 [2024-11-08 02:22:21.381788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.588 [2024-11-08 02:22:21.381810] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f70c0, cid 0, qid 0 00:18:19.588 [2024-11-08 02:22:21.381817] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7240, cid 1, qid 0 00:18:19.588 [2024-11-08 02:22:21.381823] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f73c0, cid 2, qid 0 00:18:19.588 [2024-11-08 02:22:21.381828] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.588 [2024-11-08 02:22:21.381833] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f76c0, cid 4, qid 0 00:18:19.588 [2024-11-08 02:22:21.381919] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.588 [2024-11-08 02:22:21.381926] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.588 [2024-11-08 02:22:21.381930] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.588 [2024-11-08 02:22:21.381935] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f76c0) on tqpair=0x8b0bd0 00:18:19.588 [2024-11-08 02:22:21.381940] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:18:19.588 [2024-11-08 02:22:21.381946] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:19.588 [2024-11-08 02:22:21.381955] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.381966] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.381974] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.381979] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.381983] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0bd0) 00:18:19.589 [2024-11-08 02:22:21.381991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:19.589 [2024-11-08 02:22:21.382011] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f76c0, cid 4, qid 0 00:18:19.589 [2024-11-08 02:22:21.382055] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.589 [2024-11-08 02:22:21.382063] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.589 [2024-11-08 02:22:21.382071] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382075] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f76c0) on tqpair=0x8b0bd0 00:18:19.589 [2024-11-08 02:22:21.382156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382170] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382180] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382184] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0bd0) 00:18:19.589 [2024-11-08 02:22:21.382192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.589 [2024-11-08 02:22:21.382215] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f76c0, cid 4, qid 0 00:18:19.589 [2024-11-08 02:22:21.382274] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.589 [2024-11-08 02:22:21.382282] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.589 [2024-11-08 02:22:21.382286] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382290] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0bd0): datao=0, datal=4096, cccid=4 00:18:19.589 [2024-11-08 02:22:21.382295] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f76c0) on tqpair(0x8b0bd0): expected_datao=0, payload_size=4096 00:18:19.589 [2024-11-08 02:22:21.382300] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382308] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382313] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382321] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.589 [2024-11-08 02:22:21.382328] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:18:19.589 [2024-11-08 02:22:21.382332] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f76c0) on tqpair=0x8b0bd0 00:18:19.589 [2024-11-08 02:22:21.382347] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:18:19.589 [2024-11-08 02:22:21.382359] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382371] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382379] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382384] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0bd0) 00:18:19.589 [2024-11-08 02:22:21.382392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.589 [2024-11-08 02:22:21.382413] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f76c0, cid 4, qid 0 00:18:19.589 [2024-11-08 02:22:21.382488] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.589 [2024-11-08 02:22:21.382495] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.589 [2024-11-08 02:22:21.382499] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382503] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0bd0): datao=0, datal=4096, cccid=4 00:18:19.589 [2024-11-08 02:22:21.382508] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f76c0) on tqpair(0x8b0bd0): expected_datao=0, payload_size=4096 00:18:19.589 [2024-11-08 02:22:21.382513] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382520] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382525] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382533] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.589 [2024-11-08 02:22:21.382540] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.589 [2024-11-08 02:22:21.382544] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382548] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f76c0) on tqpair=0x8b0bd0 00:18:19.589 [2024-11-08 02:22:21.382564] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382585] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382589] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0bd0) 00:18:19.589 [2024-11-08 02:22:21.382597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.589 [2024-11-08 02:22:21.382618] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f76c0, cid 4, qid 0 00:18:19.589 [2024-11-08 02:22:21.382689] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.589 [2024-11-08 02:22:21.382696] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.589 [2024-11-08 02:22:21.382700] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382704] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0bd0): datao=0, datal=4096, cccid=4 00:18:19.589 [2024-11-08 02:22:21.382709] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f76c0) on tqpair(0x8b0bd0): expected_datao=0, payload_size=4096 00:18:19.589 [2024-11-08 02:22:21.382714] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382721] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382725] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382734] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.589 [2024-11-08 02:22:21.382740] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.589 [2024-11-08 02:22:21.382744] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382748] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f76c0) on tqpair=0x8b0bd0 00:18:19.589 [2024-11-08 02:22:21.382757] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382766] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382777] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382790] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382800] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:18:19.589 [2024-11-08 02:22:21.382805] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:18:19.589 [2024-11-08 02:22:21.382811] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:18:19.589 [2024-11-08 02:22:21.382851] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382857] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0bd0) 00:18:19.589 [2024-11-08 02:22:21.382865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.589 [2024-11-08 02:22:21.382873] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382877] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.382881] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8b0bd0) 00:18:19.589 [2024-11-08 02:22:21.382888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.589 [2024-11-08 02:22:21.382911] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f76c0, cid 4, qid 0 00:18:19.589 [2024-11-08 02:22:21.382918] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7840, cid 5, qid 0 00:18:19.589 [2024-11-08 02:22:21.382983] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.589 [2024-11-08 02:22:21.382991] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.589 [2024-11-08 02:22:21.382995] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.383000] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f76c0) on tqpair=0x8b0bd0 00:18:19.589 [2024-11-08 02:22:21.383007] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.589 [2024-11-08 02:22:21.383013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.589 [2024-11-08 02:22:21.383017] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.383022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7840) on tqpair=0x8b0bd0 00:18:19.589 [2024-11-08 02:22:21.383033] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.383038] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8b0bd0) 00:18:19.589 [2024-11-08 02:22:21.383045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.589 [2024-11-08 02:22:21.383064] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7840, cid 5, qid 0 00:18:19.589 [2024-11-08 02:22:21.383107] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.589 [2024-11-08 02:22:21.383115] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.589 [2024-11-08 02:22:21.383132] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.383137] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7840) on tqpair=0x8b0bd0 00:18:19.589 [2024-11-08 02:22:21.383150] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.589 [2024-11-08 02:22:21.383155] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8b0bd0) 00:18:19.590 [2024-11-08 02:22:21.383163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.590 [2024-11-08 02:22:21.383182] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7840, cid 5, qid 0 00:18:19.590 [2024-11-08 02:22:21.383239] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.590 [2024-11-08 02:22:21.383247] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:18:19.590 [2024-11-08 02:22:21.383251] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383255] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7840) on tqpair=0x8b0bd0 00:18:19.590 [2024-11-08 02:22:21.383266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383271] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8b0bd0) 00:18:19.590 [2024-11-08 02:22:21.383279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.590 [2024-11-08 02:22:21.383296] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7840, cid 5, qid 0 00:18:19.590 [2024-11-08 02:22:21.383348] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.590 [2024-11-08 02:22:21.383356] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.590 [2024-11-08 02:22:21.383360] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383364] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7840) on tqpair=0x8b0bd0 00:18:19.590 [2024-11-08 02:22:21.383383] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383389] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8b0bd0) 00:18:19.590 [2024-11-08 02:22:21.383397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.590 [2024-11-08 02:22:21.383405] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383409] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8b0bd0) 00:18:19.590 [2024-11-08 02:22:21.383417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.590 [2024-11-08 02:22:21.383440] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383444] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8b0bd0) 00:18:19.590 [2024-11-08 02:22:21.383451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.590 [2024-11-08 02:22:21.383461] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383466] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8b0bd0) 00:18:19.590 [2024-11-08 02:22:21.383473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.590 [2024-11-08 02:22:21.383493] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7840, cid 5, qid 0 00:18:19.590 [2024-11-08 02:22:21.383500] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f76c0, cid 4, qid 0 00:18:19.590 [2024-11-08 02:22:21.383505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f79c0, cid 6, qid 0 00:18:19.590 [2024-11-08 
02:22:21.383510] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7b40, cid 7, qid 0 00:18:19.590 [2024-11-08 02:22:21.383640] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.590 [2024-11-08 02:22:21.383647] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.590 [2024-11-08 02:22:21.383651] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383655] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0bd0): datao=0, datal=8192, cccid=5 00:18:19.590 [2024-11-08 02:22:21.383660] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f7840) on tqpair(0x8b0bd0): expected_datao=0, payload_size=8192 00:18:19.590 [2024-11-08 02:22:21.383665] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383682] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383687] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383693] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.590 [2024-11-08 02:22:21.383699] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.590 [2024-11-08 02:22:21.383703] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383707] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0bd0): datao=0, datal=512, cccid=4 00:18:19.590 [2024-11-08 02:22:21.383712] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f76c0) on tqpair(0x8b0bd0): expected_datao=0, payload_size=512 00:18:19.590 [2024-11-08 02:22:21.383717] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383723] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383727] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383733] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.590 [2024-11-08 02:22:21.383739] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.590 [2024-11-08 02:22:21.383742] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383746] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0bd0): datao=0, datal=512, cccid=6 00:18:19.590 [2024-11-08 02:22:21.383751] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f79c0) on tqpair(0x8b0bd0): expected_datao=0, payload_size=512 00:18:19.590 [2024-11-08 02:22:21.383755] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383762] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383766] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383772] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:19.590 [2024-11-08 02:22:21.383778] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:19.590 [2024-11-08 02:22:21.383781] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383785] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8b0bd0): datao=0, datal=4096, cccid=7 00:18:19.590 [2024-11-08 02:22:21.383790] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f7b40) on tqpair(0x8b0bd0): expected_datao=0, payload_size=4096 00:18:19.590 [2024-11-08 02:22:21.383794] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383801] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383805] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383813] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.590 [2024-11-08 02:22:21.383820] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.590 [2024-11-08 02:22:21.383823] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383828] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7840) on tqpair=0x8b0bd0 00:18:19.590 [2024-11-08 02:22:21.383846] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.590 [2024-11-08 02:22:21.383853] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.590 [2024-11-08 02:22:21.383857] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.590 [2024-11-08 02:22:21.383861] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f76c0) on tqpair=0x8b0bd0 00:18:19.590 ===================================================== 00:18:19.590 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:19.590 ===================================================== 00:18:19.590 Controller Capabilities/Features 00:18:19.590 ================================ 00:18:19.590 Vendor ID: 8086 00:18:19.590 Subsystem Vendor ID: 8086 00:18:19.590 Serial Number: SPDK00000000000001 00:18:19.590 Model Number: SPDK bdev Controller 00:18:19.590 Firmware Version: 24.09.1 00:18:19.590 Recommended Arb Burst: 6 00:18:19.590 IEEE OUI Identifier: e4 d2 5c 00:18:19.590 Multi-path I/O 00:18:19.590 May have multiple subsystem ports: Yes 00:18:19.590 May have multiple controllers: Yes 00:18:19.590 Associated with SR-IOV VF: No 00:18:19.590 Max Data Transfer Size: 131072 00:18:19.590 Max Number of Namespaces: 32 00:18:19.590 Max Number of I/O Queues: 127 00:18:19.590 NVMe Specification Version (VS): 1.3 00:18:19.590 NVMe Specification Version (Identify): 1.3 00:18:19.590 Maximum Queue Entries: 128 00:18:19.590 Contiguous Queues Required: Yes 00:18:19.590 Arbitration Mechanisms Supported 00:18:19.590 Weighted Round Robin: Not Supported 00:18:19.590 Vendor Specific: Not Supported 00:18:19.590 Reset Timeout: 15000 ms 00:18:19.590 Doorbell Stride: 4 bytes 00:18:19.590 NVM Subsystem Reset: Not Supported 00:18:19.590 Command Sets Supported 00:18:19.590 NVM Command Set: Supported 00:18:19.590 Boot Partition: Not Supported 00:18:19.590 Memory Page Size Minimum: 4096 bytes 00:18:19.590 Memory Page Size Maximum: 4096 bytes 00:18:19.590 Persistent Memory Region: Not Supported 00:18:19.590 Optional Asynchronous Events Supported 00:18:19.590 Namespace Attribute Notices: Supported 00:18:19.590 Firmware Activation Notices: Not Supported 00:18:19.590 ANA Change Notices: Not Supported 00:18:19.590 PLE Aggregate Log Change Notices: Not Supported 00:18:19.590 LBA Status Info Alert Notices: Not Supported 00:18:19.590 EGE Aggregate Log Change Notices: Not Supported 00:18:19.590 Normal NVM Subsystem Shutdown event: Not Supported 00:18:19.590 Zone Descriptor Change Notices: Not Supported 00:18:19.590 Discovery Log Change Notices: Not Supported 
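Several of the numbers in the capability block above are decoded from the controller's CAP and VS registers rather than from the Identify data: Maximum Queue Entries is CAP.MQES+1, Doorbell Stride is 2^(2+CAP.DSTRD) bytes, Reset Timeout is CAP.TO in 500 ms units, and the memory page size bounds come from CAP.MPSMIN/MPSMAX. A small sketch of that decoding via SPDK's register accessors (an illustration only; ctrlr is assumed to come from spdk_nvme_connect() as in the earlier sketch):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Decode the CAP/VS-derived lines of the dump above from a connected controller. */
    static void print_cap_fields(struct spdk_nvme_ctrlr *ctrlr)
    {
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);

        /* CAP.MQES is zero-based, hence the +1 (128 in the dump above). */
        printf("Maximum Queue Entries: %u\n", cap.bits.mqes + 1);
        /* CAP.DSTRD encodes the stride as 2^(2+DSTRD) bytes (4 bytes above). */
        printf("Doorbell Stride: %u bytes\n", 1u << (2 + cap.bits.dstrd));
        /* CAP.TO is in 500 ms units (15000 ms above). */
        printf("Reset Timeout: %u ms\n", cap.bits.to * 500);
        /* CAP.MPSMIN/MPSMAX are exponents on a 4 KiB base (both 4096 above). */
        printf("Memory Page Size Minimum: %u bytes\n", 1u << (12 + cap.bits.mpsmin));
        printf("Memory Page Size Maximum: %u bytes\n", 1u << (12 + cap.bits.mpsmax));
        printf("NVMe Specification Version (VS): %u.%u\n", vs.bits.mjr, vs.bits.mnr);
    }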
00:18:19.590 Controller Attributes 00:18:19.590 128-bit Host Identifier: Supported 00:18:19.590 Non-Operational Permissive Mode: Not Supported 00:18:19.590 NVM Sets: Not Supported 00:18:19.590 Read Recovery Levels: Not Supported 00:18:19.590 Endurance Groups: Not Supported 00:18:19.590 Predictable Latency Mode: Not Supported 00:18:19.590 Traffic Based Keep ALive: Not Supported 00:18:19.590 Namespace Granularity: Not Supported 00:18:19.590 SQ Associations: Not Supported 00:18:19.590 UUID List: Not Supported 00:18:19.590 Multi-Domain Subsystem: Not Supported 00:18:19.590 Fixed Capacity Management: Not Supported 00:18:19.590 Variable Capacity Management: Not Supported 00:18:19.590 Delete Endurance Group: Not Supported 00:18:19.590 Delete NVM Set: Not Supported 00:18:19.590 Extended LBA Formats Supported: Not Supported 00:18:19.590 Flexible Data Placement Supported: Not Supported 00:18:19.591 00:18:19.591 Controller Memory Buffer Support 00:18:19.591 ================================ 00:18:19.591 Supported: No 00:18:19.591 00:18:19.591 Persistent Memory Region Support 00:18:19.591 ================================ 00:18:19.591 Supported: No 00:18:19.591 00:18:19.591 Admin Command Set Attributes 00:18:19.591 ============================ 00:18:19.591 Security Send/Receive: Not Supported 00:18:19.591 Format NVM: Not Supported 00:18:19.591 Firmware Activate/Download: Not Supported 00:18:19.591 Namespace Management: Not Supported 00:18:19.591 Device Self-Test: Not Supported 00:18:19.591 Directives: Not Supported 00:18:19.591 NVMe-MI: Not Supported 00:18:19.591 Virtualization Management: Not Supported 00:18:19.591 Doorbell Buffer Config: Not Supported 00:18:19.591 Get LBA Status Capability: Not Supported 00:18:19.591 Command & Feature Lockdown Capability: Not Supported 00:18:19.591 Abort Command Limit: 4 00:18:19.591 Async Event Request Limit: 4 00:18:19.591 Number of Firmware Slots: N/A 00:18:19.591 Firmware Slot 1 Read-Only: N/A 00:18:19.591 Firmware Activation Without Reset: N/A 00:18:19.591 Multiple Update Detection Support: N/A 00:18:19.591 Firmware Update Granularity: No Information Provided 00:18:19.591 Per-Namespace SMART Log: No 00:18:19.591 Asymmetric Namespace Access Log Page: Not Supported 00:18:19.591 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:18:19.591 Command Effects Log Page: Supported 00:18:19.591 Get Log Page Extended Data: Supported 00:18:19.591 Telemetry Log Pages: Not Supported 00:18:19.591 Persistent Event Log Pages: Not Supported 00:18:19.591 Supported Log Pages Log Page: May Support 00:18:19.591 Commands Supported & Effects Log Page: Not Supported 00:18:19.591 Feature Identifiers & Effects Log Page:May Support 00:18:19.591 NVMe-MI Commands & Effects Log Page: May Support 00:18:19.591 Data Area 4 for Telemetry Log: Not Supported 00:18:19.591 Error Log Page Entries Supported: 128 00:18:19.591 Keep Alive: Supported 00:18:19.591 Keep Alive Granularity: 10000 ms 00:18:19.591 00:18:19.591 NVM Command Set Attributes 00:18:19.591 ========================== 00:18:19.591 Submission Queue Entry Size 00:18:19.591 Max: 64 00:18:19.591 Min: 64 00:18:19.591 Completion Queue Entry Size 00:18:19.591 Max: 16 00:18:19.591 Min: 16 00:18:19.591 Number of Namespaces: 32 00:18:19.591 Compare Command: Supported 00:18:19.591 Write Uncorrectable Command: Not Supported 00:18:19.591 Dataset Management Command: Supported 00:18:19.591 Write Zeroes Command: Supported 00:18:19.591 Set Features Save Field: Not Supported 00:18:19.591 Reservations: Supported 00:18:19.591 Timestamp: Not Supported 00:18:19.591 
Copy: Supported 00:18:19.591 Volatile Write Cache: Present 00:18:19.591 Atomic Write Unit (Normal): 1 00:18:19.591 Atomic Write Unit (PFail): 1 00:18:19.591 Atomic Compare & Write Unit: 1 00:18:19.591 Fused Compare & Write: Supported 00:18:19.591 Scatter-Gather List 00:18:19.591 SGL Command Set: Supported 00:18:19.591 SGL Keyed: Supported 00:18:19.591 SGL Bit Bucket Descriptor: Not Supported 00:18:19.591 SGL Metadata Pointer: Not Supported 00:18:19.591 Oversized SGL: Not Supported 00:18:19.591 SGL Metadata Address: Not Supported 00:18:19.591 SGL Offset: Supported 00:18:19.591 Transport SGL Data Block: Not Supported 00:18:19.591 Replay Protected Memory Block: Not Supported 00:18:19.591 00:18:19.591 Firmware Slot Information 00:18:19.591 ========================= 00:18:19.591 Active slot: 1 00:18:19.591 Slot 1 Firmware Revision: 24.09.1 00:18:19.591 00:18:19.591 00:18:19.591 Commands Supported and Effects 00:18:19.591 ============================== 00:18:19.591 Admin Commands 00:18:19.591 -------------- 00:18:19.591 Get Log Page (02h): Supported 00:18:19.591 Identify (06h): Supported 00:18:19.591 Abort (08h): Supported 00:18:19.591 Set Features (09h): Supported 00:18:19.591 Get Features (0Ah): Supported 00:18:19.591 Asynchronous Event Request (0Ch): Supported 00:18:19.591 Keep Alive (18h): Supported 00:18:19.591 I/O Commands 00:18:19.591 ------------ 00:18:19.591 Flush (00h): Supported LBA-Change 00:18:19.591 Write (01h): Supported LBA-Change 00:18:19.591 Read (02h): Supported 00:18:19.591 Compare (05h): Supported 00:18:19.591 Write Zeroes (08h): Supported LBA-Change 00:18:19.591 Dataset Management (09h): Supported LBA-Change 00:18:19.591 Copy (19h): Supported LBA-Change 00:18:19.591 00:18:19.591 Error Log 00:18:19.591 ========= 00:18:19.591 00:18:19.591 Arbitration 00:18:19.591 =========== 00:18:19.591 Arbitration Burst: 1 00:18:19.591 00:18:19.591 Power Management 00:18:19.591 ================ 00:18:19.591 Number of Power States: 1 00:18:19.591 Current Power State: Power State #0 00:18:19.591 Power State #0: 00:18:19.591 Max Power: 0.00 W 00:18:19.591 Non-Operational State: Operational 00:18:19.591 Entry Latency: Not Reported 00:18:19.591 Exit Latency: Not Reported 00:18:19.591 Relative Read Throughput: 0 00:18:19.591 Relative Read Latency: 0 00:18:19.591 Relative Write Throughput: 0 00:18:19.591 Relative Write Latency: 0 00:18:19.591 Idle Power: Not Reported 00:18:19.591 Active Power: Not Reported 00:18:19.591 Non-Operational Permissive Mode: Not Supported 00:18:19.591 00:18:19.591 Health Information 00:18:19.591 ================== 00:18:19.591 Critical Warnings: 00:18:19.591 Available Spare Space: OK 00:18:19.591 Temperature: OK 00:18:19.591 Device Reliability: OK 00:18:19.591 Read Only: No 00:18:19.591 Volatile Memory Backup: OK 00:18:19.591 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:19.591 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:19.591 Available Spare: 0% 00:18:19.591 Available Spare Threshold: 0% 00:18:19.591 Life Percentage U[2024-11-08 02:22:21.383873] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.591 [2024-11-08 02:22:21.383879] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.591 [2024-11-08 02:22:21.383883] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.591 [2024-11-08 02:22:21.383887] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f79c0) on tqpair=0x8b0bd0 00:18:19.591 [2024-11-08 02:22:21.383895] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
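The debug records that resume below are the teardown path: after printing the dump, the tool detaches from the controller, which produces the "Prepare to destruct SSD" message, the ABORTED - SQ DELETION completions for the outstanding admin requests, and the FABRIC PROPERTY SET/GET pairs that write CC (shutdown notification) and poll CSTS until shutdown completes. A hedged sketch of the non-blocking detach variant of that step (a plain spdk_nvme_detach(ctrlr) does the same thing synchronously):

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Queue a detach and poll it to completion; the polling is what emits the
     * shutdown records seen below. */
    static int detach_controller(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_detach_ctx *detach_ctx = NULL;
        int rc;

        rc = spdk_nvme_detach_async(ctrlr, &detach_ctx);
        if (rc != 0 || detach_ctx == NULL) {
            return rc;
        }

        /* Returns -EAGAIN while the shutdown is still in progress, 0 when done. */
        while (spdk_nvme_detach_poll_async(detach_ctx) == -EAGAIN) {
            ;
        }
        return 0;
    }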
00:18:19.591 [2024-11-08 02:22:21.383901] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.591 [2024-11-08 02:22:21.383905] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.591 [2024-11-08 02:22:21.383909] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7b40) on tqpair=0x8b0bd0 00:18:19.591 [2024-11-08 02:22:21.384007] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.591 [2024-11-08 02:22:21.384014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8b0bd0) 00:18:19.591 [2024-11-08 02:22:21.384022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.591 [2024-11-08 02:22:21.384045] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7b40, cid 7, qid 0 00:18:19.591 [2024-11-08 02:22:21.384098] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.591 [2024-11-08 02:22:21.384105] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.591 [2024-11-08 02:22:21.384109] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.591 [2024-11-08 02:22:21.384113] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7b40) on tqpair=0x8b0bd0 00:18:19.591 [2024-11-08 02:22:21.388235] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:18:19.591 [2024-11-08 02:22:21.388250] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f70c0) on tqpair=0x8b0bd0 00:18:19.591 [2024-11-08 02:22:21.388257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.592 [2024-11-08 02:22:21.388263] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7240) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.388268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.592 [2024-11-08 02:22:21.388273] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f73c0) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.388278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.592 [2024-11-08 02:22:21.388283] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.388288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.592 [2024-11-08 02:22:21.388297] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388302] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388306] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.592 [2024-11-08 02:22:21.388314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.592 [2024-11-08 02:22:21.388357] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.592 [2024-11-08 02:22:21.388408] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.592 [2024-11-08 02:22:21.388416] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.592 [2024-11-08 02:22:21.388420] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388425] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.388433] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388437] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388441] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.592 [2024-11-08 02:22:21.388449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.592 [2024-11-08 02:22:21.388471] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.592 [2024-11-08 02:22:21.388545] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.592 [2024-11-08 02:22:21.388552] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.592 [2024-11-08 02:22:21.388556] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388560] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.388565] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:18:19.592 [2024-11-08 02:22:21.388569] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:18:19.592 [2024-11-08 02:22:21.388580] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388585] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388588] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.592 [2024-11-08 02:22:21.388596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.592 [2024-11-08 02:22:21.388613] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.592 [2024-11-08 02:22:21.388656] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.592 [2024-11-08 02:22:21.388663] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.592 [2024-11-08 02:22:21.388667] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388671] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.388682] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388687] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388691] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.592 [2024-11-08 02:22:21.388699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.592 [2024-11-08 02:22:21.388716] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.592 [2024-11-08 02:22:21.388760] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.592 [2024-11-08 02:22:21.388767] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.592 [2024-11-08 02:22:21.388771] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388775] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.388785] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388790] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388794] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.592 [2024-11-08 02:22:21.388801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.592 [2024-11-08 02:22:21.388818] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.592 [2024-11-08 02:22:21.388867] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.592 [2024-11-08 02:22:21.388874] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.592 [2024-11-08 02:22:21.388878] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388882] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.388892] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388897] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388901] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.592 [2024-11-08 02:22:21.388908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.592 [2024-11-08 02:22:21.388925] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.592 [2024-11-08 02:22:21.388969] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.592 [2024-11-08 02:22:21.388975] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.592 [2024-11-08 02:22:21.388980] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.388984] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.388995] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.389000] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.389003] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.592 [2024-11-08 02:22:21.389011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.592 [2024-11-08 02:22:21.389028] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.592 [2024-11-08 02:22:21.389069] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.592 [2024-11-08 02:22:21.389076] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.592 [2024-11-08 02:22:21.389079] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.389083] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.389094] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.389099] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.389102] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.592 [2024-11-08 02:22:21.389125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.592 [2024-11-08 02:22:21.389158] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.592 [2024-11-08 02:22:21.389225] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.592 [2024-11-08 02:22:21.389233] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.592 [2024-11-08 02:22:21.389237] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.389241] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.389252] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.389257] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.389261] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.592 [2024-11-08 02:22:21.389269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.592 [2024-11-08 02:22:21.389289] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.592 [2024-11-08 02:22:21.389336] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.592 [2024-11-08 02:22:21.389343] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.592 [2024-11-08 02:22:21.389347] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.389351] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.592 [2024-11-08 02:22:21.389363] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.389368] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.592 [2024-11-08 02:22:21.389372] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.592 [2024-11-08 02:22:21.389380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.592 [2024-11-08 02:22:21.389398] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 02:22:21.389441] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.389448] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.389452] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389456] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 
[2024-11-08 02:22:21.389467] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389472] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389476] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.389484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.389516] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 02:22:21.389592] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.389605] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.389609] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389614] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 [2024-11-08 02:22:21.389625] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389630] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389634] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.389641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.389660] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 02:22:21.389702] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.389713] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.389718] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389722] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 [2024-11-08 02:22:21.389733] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389738] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389742] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.389750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.389768] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 02:22:21.389816] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.389823] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.389827] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389831] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 [2024-11-08 02:22:21.389842] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 
02:22:21.389850] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.389858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.389890] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 02:22:21.389934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.389941] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.389945] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389949] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 [2024-11-08 02:22:21.389959] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389964] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.389968] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.389975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.389992] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 02:22:21.390038] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.390045] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.390049] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390053] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 [2024-11-08 02:22:21.390063] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390068] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390072] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.390079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.390096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 02:22:21.390158] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.390167] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.390171] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390175] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 [2024-11-08 02:22:21.390186] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390191] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390195] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.390203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.390223] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 02:22:21.390268] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.390275] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.390278] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390283] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 [2024-11-08 02:22:21.390293] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390298] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390302] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.390310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.390327] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 02:22:21.390370] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.390376] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.390380] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390385] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 [2024-11-08 02:22:21.390395] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390400] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390404] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.390412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.390429] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 02:22:21.390480] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.390487] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.390491] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390509] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 [2024-11-08 02:22:21.390520] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390525] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390528] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.390536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.390553] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 
02:22:21.390601] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.390608] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.390612] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390616] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 [2024-11-08 02:22:21.390626] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390631] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390635] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.390642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.390659] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.593 [2024-11-08 02:22:21.390700] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.593 [2024-11-08 02:22:21.390707] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.593 [2024-11-08 02:22:21.390710] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390714] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.593 [2024-11-08 02:22:21.390725] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390729] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.593 [2024-11-08 02:22:21.390733] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.593 [2024-11-08 02:22:21.390741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.593 [2024-11-08 02:22:21.390757] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.390802] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.390808] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 02:22:21.390812] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.390816] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.594 [2024-11-08 02:22:21.390853] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.390859] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.390863] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.390871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.390890] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.390938] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.390945] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 
02:22:21.390949] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.390953] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.594 [2024-11-08 02:22:21.390964] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.390969] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.390973] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.390981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.390999] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.391046] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.391053] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 02:22:21.391057] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391061] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.594 [2024-11-08 02:22:21.391072] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391077] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391081] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.391089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.391107] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.391166] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.391175] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 02:22:21.391179] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391183] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.594 [2024-11-08 02:22:21.391195] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391200] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391204] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.391212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.391232] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.391279] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.391286] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 02:22:21.391290] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391295] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 
00:18:19.594 [2024-11-08 02:22:21.391306] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391311] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391315] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.391323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.391341] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.391390] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.391402] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 02:22:21.391406] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391411] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.594 [2024-11-08 02:22:21.391422] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391427] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391431] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.391439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.391458] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.391529] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.391536] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 02:22:21.391540] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391544] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.594 [2024-11-08 02:22:21.391554] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391559] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391563] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.391570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.391587] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.391627] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.391634] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 02:22:21.391638] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391642] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.594 [2024-11-08 02:22:21.391652] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391657] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:18:19.594 [2024-11-08 02:22:21.391661] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.391668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.391685] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.391728] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.391735] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 02:22:21.391739] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391743] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.594 [2024-11-08 02:22:21.391753] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391758] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391762] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.391770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.391786] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.391826] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.391833] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 02:22:21.391837] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391841] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.594 [2024-11-08 02:22:21.391851] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391856] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391859] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.391867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.391884] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.391927] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.391934] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 02:22:21.391937] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391942] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.594 [2024-11-08 02:22:21.391952] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391957] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.391960] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.391968] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.391985] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.594 [2024-11-08 02:22:21.392031] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.594 [2024-11-08 02:22:21.392038] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.594 [2024-11-08 02:22:21.392041] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.392045] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.594 [2024-11-08 02:22:21.392056] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.392060] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.594 [2024-11-08 02:22:21.392064] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.594 [2024-11-08 02:22:21.392071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.594 [2024-11-08 02:22:21.392089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.595 [2024-11-08 02:22:21.396219] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.595 [2024-11-08 02:22:21.396241] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.595 [2024-11-08 02:22:21.396246] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.595 [2024-11-08 02:22:21.396251] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.595 [2024-11-08 02:22:21.396267] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:19.595 [2024-11-08 02:22:21.396273] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:19.595 [2024-11-08 02:22:21.396278] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8b0bd0) 00:18:19.595 [2024-11-08 02:22:21.396287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:19.595 [2024-11-08 02:22:21.396318] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f7540, cid 3, qid 0 00:18:19.595 [2024-11-08 02:22:21.396371] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:19.595 [2024-11-08 02:22:21.396379] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:19.595 [2024-11-08 02:22:21.396383] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:19.595 [2024-11-08 02:22:21.396387] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f7540) on tqpair=0x8b0bd0 00:18:19.595 [2024-11-08 02:22:21.396396] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:18:19.595 sed: 0% 00:18:19.595 Data Units Read: 0 00:18:19.595 Data Units Written: 0 00:18:19.595 Host Read Commands: 0 00:18:19.595 Host Write Commands: 0 00:18:19.595 Controller Busy Time: 0 minutes 00:18:19.595 Power Cycles: 0 00:18:19.595 Power On Hours: 0 hours 00:18:19.595 Unsafe Shutdowns: 0 00:18:19.595 Unrecoverable Media Errors: 0 00:18:19.595 Lifetime Error Log Entries: 0 00:18:19.595 Warning Temperature Time: 0 minutes 00:18:19.595 Critical 
Temperature Time: 0 minutes 00:18:19.595 00:18:19.595 Number of Queues 00:18:19.595 ================ 00:18:19.595 Number of I/O Submission Queues: 127 00:18:19.595 Number of I/O Completion Queues: 127 00:18:19.595 00:18:19.595 Active Namespaces 00:18:19.595 ================= 00:18:19.595 Namespace ID:1 00:18:19.595 Error Recovery Timeout: Unlimited 00:18:19.595 Command Set Identifier: NVM (00h) 00:18:19.595 Deallocate: Supported 00:18:19.595 Deallocated/Unwritten Error: Not Supported 00:18:19.595 Deallocated Read Value: Unknown 00:18:19.595 Deallocate in Write Zeroes: Not Supported 00:18:19.595 Deallocated Guard Field: 0xFFFF 00:18:19.595 Flush: Supported 00:18:19.595 Reservation: Supported 00:18:19.595 Namespace Sharing Capabilities: Multiple Controllers 00:18:19.595 Size (in LBAs): 131072 (0GiB) 00:18:19.595 Capacity (in LBAs): 131072 (0GiB) 00:18:19.595 Utilization (in LBAs): 131072 (0GiB) 00:18:19.595 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:19.595 EUI64: ABCDEF0123456789 00:18:19.595 UUID: c918ba87-3d9a-4647-a89e-55220dc7ab60 00:18:19.595 Thin Provisioning: Not Supported 00:18:19.595 Per-NS Atomic Units: Yes 00:18:19.595 Atomic Boundary Size (Normal): 0 00:18:19.595 Atomic Boundary Size (PFail): 0 00:18:19.595 Atomic Boundary Offset: 0 00:18:19.595 Maximum Single Source Range Length: 65535 00:18:19.595 Maximum Copy Length: 65535 00:18:19.595 Maximum Source Range Count: 1 00:18:19.595 NGUID/EUI64 Never Reused: No 00:18:19.595 Namespace Write Protected: No 00:18:19.595 Number of LBA Formats: 1 00:18:19.595 Current LBA Format: LBA Format #00 00:18:19.595 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:19.595 00:18:19.595 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:18:19.595 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:19.595 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.595 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:19.856 rmmod nvme_tcp 00:18:19.856 rmmod nvme_fabrics 00:18:19.856 rmmod nvme_keyring 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 88842 ']' 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 88842 
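A minimal sketch of the teardown traced above (delete the subsystem over RPC, unload the host-side NVMe modules, stop the target), assuming the repo path and controller NQN recorded in this run; the target-PID variable is hypothetical, standing in for the pid 88842 that this run killed:

  # Sketch only: rough manual equivalent of the nvmftestfini/killprocess steps above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp      # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
  kill "$NVMF_TGT_PID"         # hypothetical variable; this run stopped pid 88842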
00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 88842 ']' 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 88842 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88842 00:18:19.856 killing process with pid 88842 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88842' 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 88842 00:18:19.856 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 88842 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:18:20.115 ************************************ 00:18:20.115 END TEST nvmf_identify 00:18:20.115 ************************************ 00:18:20.115 00:18:20.115 real 0m2.056s 00:18:20.115 user 0m4.140s 00:18:20.115 sys 0m0.683s 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:20.115 02:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.375 ************************************ 00:18:20.375 START TEST nvmf_perf 00:18:20.375 ************************************ 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:20.375 * Looking for test storage... 
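For reference, a hedged standalone invocation of the perf suite that run_test just launched, using the same script path and transport argument recorded above (any environment the autotest harness normally exports is assumed to already be in place):

  # Sketch: run the host perf test directly against the TCP transport.
  cd /home/vagrant/spdk_repo/spdk
  ./test/nvmf/host/perf.sh --transport=tcp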
00:18:20.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.375 --rc genhtml_branch_coverage=1 00:18:20.375 --rc genhtml_function_coverage=1 00:18:20.375 --rc genhtml_legend=1 00:18:20.375 --rc geninfo_all_blocks=1 00:18:20.375 --rc geninfo_unexecuted_blocks=1 00:18:20.375 00:18:20.375 ' 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.375 --rc genhtml_branch_coverage=1 00:18:20.375 --rc genhtml_function_coverage=1 00:18:20.375 --rc genhtml_legend=1 00:18:20.375 --rc geninfo_all_blocks=1 00:18:20.375 --rc geninfo_unexecuted_blocks=1 00:18:20.375 00:18:20.375 ' 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.375 --rc genhtml_branch_coverage=1 00:18:20.375 --rc genhtml_function_coverage=1 00:18:20.375 --rc genhtml_legend=1 00:18:20.375 --rc geninfo_all_blocks=1 00:18:20.375 --rc geninfo_unexecuted_blocks=1 00:18:20.375 00:18:20.375 ' 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.375 --rc genhtml_branch_coverage=1 00:18:20.375 --rc genhtml_function_coverage=1 00:18:20.375 --rc genhtml_legend=1 00:18:20.375 --rc geninfo_all_blocks=1 00:18:20.375 --rc geninfo_unexecuted_blocks=1 00:18:20.375 00:18:20.375 ' 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:18:20.375 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:20.376 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:18:20.376 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:20.673 Cannot find device "nvmf_init_br" 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:20.673 Cannot find device "nvmf_init_br2" 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:20.673 Cannot find device "nvmf_tgt_br" 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:20.673 Cannot find device "nvmf_tgt_br2" 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:20.673 Cannot find device "nvmf_init_br" 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:20.673 Cannot find device "nvmf_init_br2" 00:18:20.673 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:20.674 Cannot find device "nvmf_tgt_br" 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:20.674 Cannot find device "nvmf_tgt_br2" 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:20.674 Cannot find device "nvmf_br" 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:20.674 Cannot find device "nvmf_init_if" 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:20.674 Cannot find device "nvmf_init_if2" 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:20.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:20.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:20.674 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:20.938 02:22:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:20.938 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:20.938 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:18:20.938 00:18:20.938 --- 10.0.0.3 ping statistics --- 00:18:20.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.938 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:20.938 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:20.938 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:18:20.938 00:18:20.938 --- 10.0.0.4 ping statistics --- 00:18:20.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.938 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:20.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:20.938 00:18:20.938 --- 10.0.0.1 ping statistics --- 00:18:20.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.938 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:20.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:18:20.938 00:18:20.938 --- 10.0.0.2 ping statistics --- 00:18:20.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.938 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=89088 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 89088 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 89088 ']' 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
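For orientation, the nvmf_veth_init sequence traced above reduces to a small netns/veth/bridge recipe. The sketch below is only an illustrative recap of commands already executed by the harness (interface names and addresses are the ones it uses), not additional test output:

    ip netns add nvmf_tgt_ns_spdk                               # target gets its own network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, addressed 10.0.0.1/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, addressed 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge ties the *_br peer ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.3                                           # connectivity check, as in the output above

The second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is created the same way, which is what the remaining ping checks above exercise.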
00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.938 02:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:20.938 [2024-11-08 02:22:22.756508] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:20.938 [2024-11-08 02:22:22.756596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.206 [2024-11-08 02:22:22.897705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:21.206 [2024-11-08 02:22:22.938715] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.206 [2024-11-08 02:22:22.938774] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.206 [2024-11-08 02:22:22.938789] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.206 [2024-11-08 02:22:22.938800] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.206 [2024-11-08 02:22:22.938809] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.206 [2024-11-08 02:22:22.938906] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.206 [2024-11-08 02:22:22.939720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.206 [2024-11-08 02:22:22.939858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:21.206 [2024-11-08 02:22:22.939867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.206 [2024-11-08 02:22:22.972670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:21.206 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.206 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:18:21.206 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:21.206 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:21.206 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:21.207 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.207 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:21.207 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:18:21.771 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:18:21.771 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:22.029 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:18:22.029 02:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:22.286 02:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:22.286 02:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:00:10.0 ']' 00:18:22.286 02:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:22.286 02:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:22.286 02:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:22.543 [2024-11-08 02:22:24.354961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.543 02:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:23.109 02:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:23.109 02:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:23.109 02:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:23.109 02:22:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:23.367 02:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:23.626 [2024-11-08 02:22:25.428415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:23.626 02:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:23.884 02:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:23.884 02:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:23.884 02:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:23.884 02:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:25.259 Initializing NVMe Controllers 00:18:25.259 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:25.259 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:25.259 Initialization complete. Launching workers. 00:18:25.259 ======================================================== 00:18:25.259 Latency(us) 00:18:25.259 Device Information : IOPS MiB/s Average min max 00:18:25.259 PCIE (0000:00:10.0) NSID 1 from core 0: 22964.00 89.70 1393.30 327.99 6776.88 00:18:25.259 ======================================================== 00:18:25.259 Total : 22964.00 89.70 1393.30 327.99 6776.88 00:18:25.259 00:18:25.259 02:22:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:26.635 Initializing NVMe Controllers 00:18:26.635 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:26.635 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:26.635 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:26.635 Initialization complete. Launching workers. 
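The target provisioning that precedes these perf runs is a plain rpc.py sequence; it is restated here as a sketch (commands taken from the trace above, with the full rpc.py path shortened), while the per-run latency tables continue below. The nvmf_tgt process itself was started inside the namespace earlier with ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -m 0xF, so the listener address 10.0.0.3 belongs to the namespaced nvmf_tgt_if interface set up above:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # 64 MiB malloc bdev, 512 B blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe at 0000:00:10.0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420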
00:18:26.635 ======================================================== 00:18:26.635 Latency(us) 00:18:26.635 Device Information : IOPS MiB/s Average min max 00:18:26.635 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3887.89 15.19 256.87 95.08 5263.20 00:18:26.635 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8063.54 6744.50 12020.05 00:18:26.635 ======================================================== 00:18:26.635 Total : 4012.89 15.68 500.04 95.08 12020.05 00:18:26.635 00:18:26.635 02:22:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:28.013 Initializing NVMe Controllers 00:18:28.013 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:28.013 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:28.013 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:28.013 Initialization complete. Launching workers. 00:18:28.013 ======================================================== 00:18:28.013 Latency(us) 00:18:28.013 Device Information : IOPS MiB/s Average min max 00:18:28.013 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9218.64 36.01 3473.84 601.46 8141.85 00:18:28.013 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3989.10 15.58 8034.85 4162.30 15200.27 00:18:28.013 ======================================================== 00:18:28.013 Total : 13207.74 51.59 4851.39 601.46 15200.27 00:18:28.013 00:18:28.013 02:22:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:28.013 02:22:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:30.547 Initializing NVMe Controllers 00:18:30.547 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:30.547 Controller IO queue size 128, less than required. 00:18:30.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:30.547 Controller IO queue size 128, less than required. 00:18:30.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:30.547 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:30.547 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:30.547 Initialization complete. Launching workers. 
00:18:30.547 ======================================================== 00:18:30.547 Latency(us) 00:18:30.547 Device Information : IOPS MiB/s Average min max 00:18:30.547 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1904.92 476.23 68259.30 41932.13 105394.07 00:18:30.547 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 681.54 170.39 193613.09 78876.08 334421.13 00:18:30.547 ======================================================== 00:18:30.547 Total : 2586.46 646.62 101290.41 41932.13 334421.13 00:18:30.547 00:18:30.547 02:22:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:30.547 Initializing NVMe Controllers 00:18:30.547 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:30.547 Controller IO queue size 128, less than required. 00:18:30.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:30.547 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:30.547 Controller IO queue size 128, less than required. 00:18:30.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:30.547 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:30.547 WARNING: Some requested NVMe devices were skipped 00:18:30.547 No valid NVMe controllers or AIO or URING devices found 00:18:30.547 02:22:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:33.081 Initializing NVMe Controllers 00:18:33.081 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:33.081 Controller IO queue size 128, less than required. 00:18:33.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:33.081 Controller IO queue size 128, less than required. 00:18:33.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:33.081 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:33.081 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:33.081 Initialization complete. Launching workers. 
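The -o 36964 run above dropped both namespaces for a simple reason: the requested I/O size is not a whole number of sectors for either one. A quick check (just restating the arithmetic behind the warnings; the --transport-stat run's output continues below):

    echo $(( 36964 % 512 ))    # 100 -> not a multiple of nsid 1's 512 B sector size
    echo $(( 36964 % 4096 ))   # 100 -> not a multiple of nsid 2's 4096 B sector size

With no eligible namespace left, spdk_nvme_perf reports "No valid NVMe controllers or AIO or URING devices found" and that run is skipped, as shown above.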
00:18:33.081 00:18:33.081 ==================== 00:18:33.081 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:33.081 TCP transport: 00:18:33.081 polls: 10294 00:18:33.081 idle_polls: 5801 00:18:33.081 sock_completions: 4493 00:18:33.081 nvme_completions: 7013 00:18:33.081 submitted_requests: 10434 00:18:33.081 queued_requests: 1 00:18:33.081 00:18:33.081 ==================== 00:18:33.081 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:33.081 TCP transport: 00:18:33.081 polls: 13564 00:18:33.081 idle_polls: 9467 00:18:33.081 sock_completions: 4097 00:18:33.081 nvme_completions: 6581 00:18:33.081 submitted_requests: 9842 00:18:33.081 queued_requests: 1 00:18:33.081 ======================================================== 00:18:33.081 Latency(us) 00:18:33.081 Device Information : IOPS MiB/s Average min max 00:18:33.081 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1752.91 438.23 73927.80 43357.54 112152.41 00:18:33.081 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1644.92 411.23 79480.49 26436.59 142091.44 00:18:33.081 ======================================================== 00:18:33.081 Total : 3397.83 849.46 76615.91 26436.59 142091.44 00:18:33.081 00:18:33.081 02:22:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:33.340 02:22:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.599 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:33.599 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:33.599 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:33.858 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=ce889265-2daa-443d-96d8-6e29a88e7131 00:18:33.858 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb ce889265-2daa-443d-96d8-6e29a88e7131 00:18:33.858 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=ce889265-2daa-443d-96d8-6e29a88e7131 00:18:33.858 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:33.858 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:33.858 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:33.858 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:34.117 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:34.117 { 00:18:34.117 "uuid": "ce889265-2daa-443d-96d8-6e29a88e7131", 00:18:34.117 "name": "lvs_0", 00:18:34.117 "base_bdev": "Nvme0n1", 00:18:34.117 "total_data_clusters": 1278, 00:18:34.117 "free_clusters": 1278, 00:18:34.117 "block_size": 4096, 00:18:34.117 "cluster_size": 4194304 00:18:34.117 } 00:18:34.117 ]' 00:18:34.117 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="ce889265-2daa-443d-96d8-6e29a88e7131") .free_clusters' 00:18:34.117 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:18:34.117 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="ce889265-2daa-443d-96d8-6e29a88e7131") .cluster_size' 00:18:34.117 5112 00:18:34.117 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:34.117 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:18:34.117 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:18:34.117 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:34.117 02:22:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ce889265-2daa-443d-96d8-6e29a88e7131 lbd_0 5112 00:18:34.376 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=773a7196-f62d-4434-807b-84a5fb69dfd5 00:18:34.376 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 773a7196-f62d-4434-807b-84a5fb69dfd5 lvs_n_0 00:18:34.635 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=1b9774e4-c759-40a7-97e9-bd076d43f711 00:18:34.635 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 1b9774e4-c759-40a7-97e9-bd076d43f711 00:18:34.635 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=1b9774e4-c759-40a7-97e9-bd076d43f711 00:18:34.635 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:34.635 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:34.635 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:34.635 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:34.893 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:34.893 { 00:18:34.893 "uuid": "ce889265-2daa-443d-96d8-6e29a88e7131", 00:18:34.893 "name": "lvs_0", 00:18:34.893 "base_bdev": "Nvme0n1", 00:18:34.893 "total_data_clusters": 1278, 00:18:34.893 "free_clusters": 0, 00:18:34.893 "block_size": 4096, 00:18:34.893 "cluster_size": 4194304 00:18:34.893 }, 00:18:34.893 { 00:18:34.893 "uuid": "1b9774e4-c759-40a7-97e9-bd076d43f711", 00:18:34.893 "name": "lvs_n_0", 00:18:34.893 "base_bdev": "773a7196-f62d-4434-807b-84a5fb69dfd5", 00:18:34.893 "total_data_clusters": 1276, 00:18:34.893 "free_clusters": 1276, 00:18:34.893 "block_size": 4096, 00:18:34.893 "cluster_size": 4194304 00:18:34.893 } 00:18:34.893 ]' 00:18:34.893 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="1b9774e4-c759-40a7-97e9-bd076d43f711") .free_clusters' 00:18:34.893 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:18:34.893 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="1b9774e4-c759-40a7-97e9-bd076d43f711") .cluster_size' 00:18:35.151 5104 00:18:35.151 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:35.151 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:18:35.151 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:18:35.151 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:35.151 02:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1b9774e4-c759-40a7-97e9-bd076d43f711 lbd_nest_0 5104 00:18:35.410 02:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=ed9fb2e6-8615-4db6-a41e-5fcba9942a26 00:18:35.410 02:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:35.668 02:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:35.668 02:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 ed9fb2e6-8615-4db6-a41e-5fcba9942a26 00:18:35.927 02:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:36.186 02:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:36.186 02:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:36.186 02:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:36.186 02:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:36.186 02:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:36.445 Initializing NVMe Controllers 00:18:36.445 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.445 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:36.445 WARNING: Some requested NVMe devices were skipped 00:18:36.445 No valid NVMe controllers or AIO or URING devices found 00:18:36.703 02:22:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:36.703 02:22:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:46.681 Initializing NVMe Controllers 00:18:46.681 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:46.681 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:46.681 Initialization complete. Launching workers. 
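The sizes flowing through the lvol steps above come straight from get_lvs_free_mb, which multiplies free_clusters by cluster_size. That is where 5112 (lvs_0) and 5104 (lvs_n_0) come from, and 5104 MiB is exactly the 5351931904-byte namespace size quoted in the warnings, whose 4096-byte block size is also why every 512-byte run in the qd_depth/io_size sweep gets skipped. An illustrative recap of the arithmetic and of the sweep loop (results of the current run continue below):

    echo $(( 1278 * 4 ))            # lvs_0 free space: 1278 clusters x 4 MiB cluster = 5112 MiB
    echo $(( 1276 * 4 ))            # lvs_n_0 free space: 1276 clusters x 4 MiB cluster = 5104 MiB
    echo $(( 5104 * 1024 * 1024 ))  # 5351931904 bytes, the ns size reported in the warnings

    for qd in 1 32 128; do          # qd_depth sweep, as in host/perf.sh above
      for o in 512 131072; do       # io_size sweep; the 512 B cases are skipped per the warnings
        spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
      done
    done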
00:18:46.681 ======================================================== 00:18:46.681 Latency(us) 00:18:46.681 Device Information : IOPS MiB/s Average min max 00:18:46.681 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 970.30 121.29 1030.20 321.59 8478.85 00:18:46.681 ======================================================== 00:18:46.681 Total : 970.30 121.29 1030.20 321.59 8478.85 00:18:46.681 00:18:46.940 02:22:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:46.940 02:22:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:46.940 02:22:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:47.198 Initializing NVMe Controllers 00:18:47.198 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:47.198 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:47.198 WARNING: Some requested NVMe devices were skipped 00:18:47.198 No valid NVMe controllers or AIO or URING devices found 00:18:47.198 02:22:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:47.198 02:22:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:59.413 Initializing NVMe Controllers 00:18:59.413 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:59.413 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:59.413 Initialization complete. Launching workers. 
00:18:59.413 ======================================================== 00:18:59.413 Latency(us) 00:18:59.413 Device Information : IOPS MiB/s Average min max 00:18:59.413 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1319.58 164.95 24284.73 7298.11 62471.16 00:18:59.413 ======================================================== 00:18:59.413 Total : 1319.58 164.95 24284.73 7298.11 62471.16 00:18:59.413 00:18:59.413 02:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:59.413 02:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:59.413 02:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:59.413 Initializing NVMe Controllers 00:18:59.413 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:59.413 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:59.413 WARNING: Some requested NVMe devices were skipped 00:18:59.413 No valid NVMe controllers or AIO or URING devices found 00:18:59.413 02:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:59.413 02:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:09.388 Initializing NVMe Controllers 00:19:09.388 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:09.388 Controller IO queue size 128, less than required. 00:19:09.388 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:09.388 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:09.388 Initialization complete. Launching workers. 
00:19:09.388 ======================================================== 00:19:09.388 Latency(us) 00:19:09.388 Device Information : IOPS MiB/s Average min max 00:19:09.388 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4174.89 521.86 30693.15 10340.66 63764.25 00:19:09.388 ======================================================== 00:19:09.388 Total : 4174.89 521.86 30693.15 10340.66 63764.25 00:19:09.388 00:19:09.388 02:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.388 02:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ed9fb2e6-8615-4db6-a41e-5fcba9942a26 00:19:09.388 02:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:09.388 02:23:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 773a7196-f62d-4434-807b-84a5fb69dfd5 00:19:09.388 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:09.647 rmmod nvme_tcp 00:19:09.647 rmmod nvme_fabrics 00:19:09.647 rmmod nvme_keyring 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 89088 ']' 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 89088 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 89088 ']' 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 89088 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89088 00:19:09.647 killing process with pid 89088 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89088' 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 89088 00:19:09.647 02:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 89088 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:11.025 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:11.284 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:11.284 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:11.284 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.284 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.284 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.284 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:19:11.284 ************************************ 00:19:11.284 END TEST nvmf_perf 00:19:11.284 ************************************ 00:19:11.284 00:19:11.284 real 0m50.940s 00:19:11.284 user 3m11.907s 00:19:11.284 sys 0m11.840s 00:19:11.284 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:11.285 02:23:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:11.285 02:23:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:11.285 02:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:11.285 02:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:11.285 02:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.285 ************************************ 00:19:11.285 START TEST nvmf_fio_host 00:19:11.285 ************************************ 00:19:11.285 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:11.285 * Looking for test storage... 00:19:11.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:11.285 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:11.285 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:19:11.285 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:11.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.544 --rc genhtml_branch_coverage=1 00:19:11.544 --rc genhtml_function_coverage=1 00:19:11.544 --rc genhtml_legend=1 00:19:11.544 --rc geninfo_all_blocks=1 00:19:11.544 --rc geninfo_unexecuted_blocks=1 00:19:11.544 00:19:11.544 ' 00:19:11.544 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:11.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.544 --rc genhtml_branch_coverage=1 00:19:11.544 --rc genhtml_function_coverage=1 00:19:11.544 --rc genhtml_legend=1 00:19:11.544 --rc geninfo_all_blocks=1 00:19:11.545 --rc geninfo_unexecuted_blocks=1 00:19:11.545 00:19:11.545 ' 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:11.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.545 --rc genhtml_branch_coverage=1 00:19:11.545 --rc genhtml_function_coverage=1 00:19:11.545 --rc genhtml_legend=1 00:19:11.545 --rc geninfo_all_blocks=1 00:19:11.545 --rc geninfo_unexecuted_blocks=1 00:19:11.545 00:19:11.545 ' 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:11.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.545 --rc genhtml_branch_coverage=1 00:19:11.545 --rc genhtml_function_coverage=1 00:19:11.545 --rc genhtml_legend=1 00:19:11.545 --rc geninfo_all_blocks=1 00:19:11.545 --rc geninfo_unexecuted_blocks=1 00:19:11.545 00:19:11.545 ' 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.545 02:23:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.545 02:23:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:11.545 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
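Condensed for reference, the host-identity setup captured above (nvme gen-hostnqn feeding NVME_HOSTNQN, NVME_HOSTID and the NVME_HOST flag array) amounts to the following standalone sketch; the parameter expansion used to recover the UUID is an illustration, not necessarily the exact expression the helper script uses:

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:29f72880-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # keep only the UUID after the last ':'
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # "${NVME_HOST[@]}" is later expanded onto 'nvme connect' invocations by the kernel-initiator tests.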
00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:11.545 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:11.546 Cannot find device "nvmf_init_br" 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:11.546 Cannot find device "nvmf_init_br2" 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:11.546 Cannot find device "nvmf_tgt_br" 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:19:11.546 Cannot find device "nvmf_tgt_br2" 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:11.546 Cannot find device "nvmf_init_br" 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:11.546 Cannot find device "nvmf_init_br2" 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:11.546 Cannot find device "nvmf_tgt_br" 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:11.546 Cannot find device "nvmf_tgt_br2" 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:11.546 Cannot find device "nvmf_br" 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:11.546 Cannot find device "nvmf_init_if" 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:11.546 Cannot find device "nvmf_init_if2" 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:11.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:11.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:11.546 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:11.805 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:11.805 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:19:11.805 00:19:11.805 --- 10.0.0.3 ping statistics --- 00:19:11.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.805 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:11.805 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:11.805 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:19:11.805 00:19:11.805 --- 10.0.0.4 ping statistics --- 00:19:11.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.805 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:11.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:11.805 00:19:11.805 --- 10.0.0.1 ping statistics --- 00:19:11.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.805 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:11.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:19:11.805 00:19:11.805 --- 10.0.0.2 ping statistics --- 00:19:11.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.805 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89959 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89959 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 89959 ']' 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.805 02:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.064 [2024-11-08 02:23:13.724856] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:12.064 [2024-11-08 02:23:13.724966] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.064 [2024-11-08 02:23:13.868292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.064 [2024-11-08 02:23:13.909493] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.064 [2024-11-08 02:23:13.909557] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.064 [2024-11-08 02:23:13.909579] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.064 [2024-11-08 02:23:13.909588] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.064 [2024-11-08 02:23:13.909596] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
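The nvmf_veth_init sequence above builds a small virtual topology before the target starts: two initiator veth pairs and two target veth pairs (the target ends moved into the nvmf_tgt_ns_spdk namespace), all joined by the nvmf_br bridge, with iptables ACCEPT rules for NVMe/TCP on port 4420. A condensed sketch follows, using the interface and address names from this run; it approximates the test helper rather than reproducing it, and assumes root privileges:

  #!/usr/bin/env bash
  set -e
  ip netns add nvmf_tgt_ns_spdk
  # Initiator-side and target-side veth pairs; the *_br ends attach to the bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # Target interfaces live inside the namespace where nvmf_tgt will run.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Addressing: initiators on 10.0.0.1/.2, targets on 10.0.0.3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # Bring everything up and bridge the four *_br ends together.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
  done
  # Allow NVMe/TCP traffic in and forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3   # sanity check: the host can reach the target address
  # Finally, the target itself runs inside the namespace:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &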
00:19:12.064 [2024-11-08 02:23:13.910295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.064 [2024-11-08 02:23:13.910382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.064 [2024-11-08 02:23:13.910470] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.064 [2024-11-08 02:23:13.910475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.064 [2024-11-08 02:23:13.944062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:12.323 02:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.323 02:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:19:12.323 02:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:12.582 [2024-11-08 02:23:14.263523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.582 02:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:19:12.582 02:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:12.582 02:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.582 02:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:12.841 Malloc1 00:19:12.841 02:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:13.100 02:23:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:13.358 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:13.617 [2024-11-08 02:23:15.446613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:13.617 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:13.876 02:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:14.135 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:14.135 fio-3.35 00:19:14.135 Starting 1 thread 00:19:16.669 00:19:16.669 test: (groupid=0, jobs=1): err= 0: pid=90029: Fri Nov 8 02:23:18 2024 00:19:16.669 read: IOPS=9197, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec) 00:19:16.669 slat (nsec): min=1855, max=304654, avg=2354.29, stdev=3261.41 00:19:16.669 clat (usec): min=2613, max=12943, avg=7253.16, stdev=543.55 00:19:16.669 lat (usec): min=2673, max=12945, avg=7255.52, stdev=543.41 00:19:16.669 clat percentiles (usec): 00:19:16.670 | 1.00th=[ 6194], 5.00th=[ 6521], 10.00th=[ 6652], 20.00th=[ 6849], 00:19:16.670 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7308], 00:19:16.670 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8160], 00:19:16.670 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[10945], 99.95th=[11994], 00:19:16.670 | 99.99th=[12911] 00:19:16.670 bw ( KiB/s): min=35984, max=37280, per=99.93%, avg=36766.00, stdev=608.64, samples=4 00:19:16.670 iops : min= 8996, max= 9320, avg=9191.50, stdev=152.16, samples=4 00:19:16.670 write: IOPS=9202, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec); 0 zone resets 00:19:16.670 slat (nsec): min=1955, max=284726, avg=2471.04, stdev=2602.73 00:19:16.670 clat (usec): min=2460, max=12965, avg=6614.11, stdev=496.61 00:19:16.670 lat (usec): min=2473, max=12967, avg=6616.58, stdev=496.62 00:19:16.670 clat 
percentiles (usec): 00:19:16.670 | 1.00th=[ 5669], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6259], 00:19:16.670 | 30.00th=[ 6390], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:19:16.670 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:19:16.670 | 99.00th=[ 7898], 99.50th=[ 8160], 99.90th=[10945], 99.95th=[11994], 00:19:16.670 | 99.99th=[12911] 00:19:16.670 bw ( KiB/s): min=36544, max=37120, per=99.98%, avg=36802.00, stdev=239.50, samples=4 00:19:16.670 iops : min= 9136, max= 9280, avg=9200.50, stdev=59.87, samples=4 00:19:16.670 lat (msec) : 4=0.08%, 10=99.76%, 20=0.16% 00:19:16.670 cpu : usr=69.08%, sys=23.54%, ctx=10, majf=0, minf=8 00:19:16.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:16.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.670 issued rwts: total=18451,18460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.670 00:19:16.670 Run status group 0 (all jobs): 00:19:16.670 READ: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:19:16.670 WRITE: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:16.670 02:23:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:16.670 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:16.670 fio-3.35 00:19:16.670 Starting 1 thread 00:19:19.224 00:19:19.224 test: (groupid=0, jobs=1): err= 0: pid=90076: Fri Nov 8 02:23:20 2024 00:19:19.224 read: IOPS=8721, BW=136MiB/s (143MB/s)(273MiB/2005msec) 00:19:19.224 slat (usec): min=2, max=107, avg= 3.57, stdev= 2.19 00:19:19.224 clat (usec): min=1766, max=15754, avg=8220.18, stdev=2436.25 00:19:19.224 lat (usec): min=1769, max=15757, avg=8223.75, stdev=2436.29 00:19:19.224 clat percentiles (usec): 00:19:19.224 | 1.00th=[ 3818], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 5932], 00:19:19.224 | 30.00th=[ 6652], 40.00th=[ 7373], 50.00th=[ 8029], 60.00th=[ 8717], 00:19:19.224 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11469], 95.00th=[12518], 00:19:19.224 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15401], 99.95th=[15533], 00:19:19.224 | 99.99th=[15664] 00:19:19.224 bw ( KiB/s): min=61120, max=79776, per=50.54%, avg=70520.00, stdev=9495.30, samples=4 00:19:19.224 iops : min= 3820, max= 4986, avg=4407.50, stdev=593.46, samples=4 00:19:19.224 write: IOPS=5121, BW=80.0MiB/s (83.9MB/s)(144MiB/1804msec); 0 zone resets 00:19:19.224 slat (usec): min=32, max=351, avg=37.44, stdev= 9.36 00:19:19.225 clat (usec): min=4625, max=19120, avg=11544.41, stdev=2120.00 00:19:19.225 lat (usec): min=4658, max=19156, avg=11581.85, stdev=2121.00 00:19:19.225 clat percentiles (usec): 00:19:19.225 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9765], 00:19:19.225 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:19:19.225 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14484], 95.00th=[15139], 00:19:19.225 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[18220], 00:19:19.225 | 99.99th=[19006] 00:19:19.225 bw ( KiB/s): min=65792, max=81760, per=89.58%, avg=73408.00, stdev=8744.88, samples=4 00:19:19.225 iops : min= 4112, max= 5110, avg=4588.00, stdev=546.56, samples=4 00:19:19.225 lat (msec) : 2=0.01%, 4=0.95%, 10=57.96%, 20=41.07% 00:19:19.225 cpu : usr=83.79%, sys=12.12%, ctx=9, majf=0, minf=4 00:19:19.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:19.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:19.225 issued rwts: total=17486,9239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:19.225 00:19:19.225 Run status group 0 (all jobs): 00:19:19.225 
READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=273MiB (286MB), run=2005-2005msec 00:19:19.225 WRITE: bw=80.0MiB/s (83.9MB/s), 80.0MiB/s-80.0MiB/s (83.9MB/s-83.9MB/s), io=144MiB (151MB), run=1804-1804msec 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:19.225 02:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:19:19.483 Nvme0n1 00:19:19.483 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:19:19.742 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=938aa46c-b244-492c-8ee0-ff2171f15222 00:19:19.742 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 938aa46c-b244-492c-8ee0-ff2171f15222 00:19:19.742 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=938aa46c-b244-492c-8ee0-ff2171f15222 00:19:19.742 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:19:19.742 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:19:19.742 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:19:19.742 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:19.999 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:19:19.999 { 00:19:19.999 "uuid": "938aa46c-b244-492c-8ee0-ff2171f15222", 00:19:19.999 "name": "lvs_0", 00:19:19.999 "base_bdev": "Nvme0n1", 00:19:20.000 "total_data_clusters": 4, 00:19:20.000 "free_clusters": 4, 00:19:20.000 "block_size": 4096, 00:19:20.000 "cluster_size": 1073741824 00:19:20.000 } 00:19:20.000 ]' 00:19:20.000 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="938aa46c-b244-492c-8ee0-ff2171f15222") .free_clusters' 00:19:20.259 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:19:20.259 
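Each fio pass in this test follows the same provisioning pattern, shown above for cnode1 with a Malloc bdev and repeated below for cnode2/cnode3 with lvol bdevs. A condensed sketch of that rpc.py sequence, with the workload then driven through the SPDK fio plugin rather than the kernel initiator (paths and names taken from this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                               # one-time: enable the TCP transport
  $rpc bdev_malloc_create 64 512 -b Malloc1                                  # backing bdev (a malloc ramdisk here)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1              # expose the bdev as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  # fio connects over TCP via the preloaded SPDK NVMe ioengine:
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096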
02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="938aa46c-b244-492c-8ee0-ff2171f15222") .cluster_size' 00:19:20.259 4096 00:19:20.259 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:19:20.259 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:19:20.259 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:19:20.259 02:23:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:19:20.518 a20ae950-022c-45b2-9d31-244850983170 00:19:20.518 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:19:20.777 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:19:21.036 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:21.296 02:23:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:21.296 02:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:21.296 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:21.296 fio-3.35 00:19:21.296 Starting 1 thread 00:19:23.830 00:19:23.830 test: (groupid=0, jobs=1): err= 0: pid=90185: Fri Nov 8 02:23:25 2024 00:19:23.830 read: IOPS=6270, BW=24.5MiB/s (25.7MB/s)(49.2MiB/2008msec) 00:19:23.830 slat (nsec): min=1923, max=317128, avg=2671.60, stdev=4210.50 00:19:23.830 clat (usec): min=2973, max=19279, avg=10671.66, stdev=860.18 00:19:23.830 lat (usec): min=2983, max=19281, avg=10674.34, stdev=859.84 00:19:23.830 clat percentiles (usec): 00:19:23.830 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:19:23.830 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:19:23.830 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:19:23.830 | 99.00th=[12518], 99.50th=[12911], 99.90th=[16581], 99.95th=[16909], 00:19:23.830 | 99.99th=[19268] 00:19:23.830 bw ( KiB/s): min=23888, max=25552, per=99.80%, avg=25032.00, stdev=768.97, samples=4 00:19:23.830 iops : min= 5972, max= 6388, avg=6258.00, stdev=192.24, samples=4 00:19:23.830 write: IOPS=6253, BW=24.4MiB/s (25.6MB/s)(49.1MiB/2008msec); 0 zone resets 00:19:23.830 slat (usec): min=2, max=236, avg= 2.78, stdev= 2.85 00:19:23.830 clat (usec): min=2480, max=18142, avg=9673.82, stdev=810.15 00:19:23.830 lat (usec): min=2494, max=18144, avg=9676.61, stdev=809.98 00:19:23.830 clat percentiles (usec): 00:19:23.830 | 1.00th=[ 7963], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:19:23.830 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:19:23.830 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10814], 00:19:23.830 | 99.00th=[11469], 99.50th=[11731], 99.90th=[15533], 99.95th=[16581], 00:19:23.830 | 99.99th=[16909] 00:19:23.830 bw ( KiB/s): min=24896, max=25280, per=99.98%, avg=25010.00, stdev=182.24, samples=4 00:19:23.830 iops : min= 6224, max= 6320, avg=6252.50, stdev=45.56, samples=4 00:19:23.830 lat (msec) : 4=0.06%, 10=43.17%, 20=56.77% 00:19:23.830 cpu : usr=73.04%, sys=21.57%, ctx=11, majf=0, minf=8 00:19:23.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:23.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.830 issued rwts: total=12591,12557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.830 00:19:23.830 Run status group 0 (all jobs): 00:19:23.830 READ: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=49.2MiB (51.6MB), run=2008-2008msec 
00:19:23.830 WRITE: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=49.1MiB (51.4MB), run=2008-2008msec 00:19:23.830 02:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:23.830 02:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:19:24.165 02:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d0c44805-067e-4796-a7d0-3facd001c00d 00:19:24.165 02:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d0c44805-067e-4796-a7d0-3facd001c00d 00:19:24.165 02:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=d0c44805-067e-4796-a7d0-3facd001c00d 00:19:24.165 02:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:19:24.165 02:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:19:24.165 02:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:19:24.165 02:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:24.459 02:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:19:24.459 { 00:19:24.459 "uuid": "938aa46c-b244-492c-8ee0-ff2171f15222", 00:19:24.459 "name": "lvs_0", 00:19:24.459 "base_bdev": "Nvme0n1", 00:19:24.459 "total_data_clusters": 4, 00:19:24.459 "free_clusters": 0, 00:19:24.459 "block_size": 4096, 00:19:24.459 "cluster_size": 1073741824 00:19:24.459 }, 00:19:24.459 { 00:19:24.459 "uuid": "d0c44805-067e-4796-a7d0-3facd001c00d", 00:19:24.459 "name": "lvs_n_0", 00:19:24.459 "base_bdev": "a20ae950-022c-45b2-9d31-244850983170", 00:19:24.459 "total_data_clusters": 1022, 00:19:24.459 "free_clusters": 1022, 00:19:24.459 "block_size": 4096, 00:19:24.459 "cluster_size": 4194304 00:19:24.459 } 00:19:24.459 ]' 00:19:24.459 02:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d0c44805-067e-4796-a7d0-3facd001c00d") .free_clusters' 00:19:24.459 02:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:19:24.459 02:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d0c44805-067e-4796-a7d0-3facd001c00d") .cluster_size' 00:19:24.459 4088 00:19:24.459 02:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:19:24.459 02:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:19:24.459 02:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:19:24.459 02:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:19:24.718 e9840cf2-b4a5-41af-bc62-b8ab7e154a24 00:19:24.718 02:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:19:24.976 02:23:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:19:25.235 02:23:27 
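The lvol sizes requested above (4096 MiB for lvs_0/lbd_0, 4088 MiB for lvs_n_0/lbd_nest_0) are simply what get_lvs_free_mb computes from the bdev_lvol_get_lvstores output in this run: free_clusters times cluster_size, converted to MiB. A minimal worked example using the reported values:

  echo $(( 4    * 1073741824 / 1048576 ))   # lvs_0  : 4 clusters x 1 GiB  -> 4096
  echo $(( 1022 * 4194304    / 1048576 ))   # lvs_n_0: 1022 clusters x 4 MiB -> 4088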
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:25.494 02:23:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:25.753 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:25.753 fio-3.35 00:19:25.753 Starting 1 thread 00:19:28.287 00:19:28.287 test: (groupid=0, jobs=1): err= 0: pid=90260: Fri Nov 8 02:23:29 2024 00:19:28.287 read: 
IOPS=5605, BW=21.9MiB/s (23.0MB/s)(44.0MiB/2010msec) 00:19:28.287 slat (nsec): min=1904, max=303699, avg=2734.31, stdev=4051.17 00:19:28.287 clat (usec): min=3331, max=21357, avg=11977.74, stdev=999.60 00:19:28.287 lat (usec): min=3343, max=21359, avg=11980.47, stdev=999.22 00:19:28.287 clat percentiles (usec): 00:19:28.287 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:19:28.287 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:19:28.287 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13173], 95.00th=[13435], 00:19:28.287 | 99.00th=[14222], 99.50th=[14484], 99.90th=[19006], 99.95th=[21103], 00:19:28.287 | 99.99th=[21365] 00:19:28.287 bw ( KiB/s): min=21400, max=22968, per=99.86%, avg=22390.00, stdev=715.14, samples=4 00:19:28.287 iops : min= 5350, max= 5742, avg=5597.50, stdev=178.78, samples=4 00:19:28.287 write: IOPS=5568, BW=21.8MiB/s (22.8MB/s)(43.7MiB/2010msec); 0 zone resets 00:19:28.287 slat (nsec): min=1983, max=300561, avg=2838.86, stdev=3557.42 00:19:28.287 clat (usec): min=2450, max=19985, avg=10818.73, stdev=924.90 00:19:28.287 lat (usec): min=2468, max=19988, avg=10821.57, stdev=924.70 00:19:28.287 clat percentiles (usec): 00:19:28.287 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:19:28.287 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:19:28.287 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:19:28.287 | 99.00th=[12911], 99.50th=[13173], 99.90th=[17171], 99.95th=[18744], 00:19:28.287 | 99.99th=[20055] 00:19:28.287 bw ( KiB/s): min=22144, max=22400, per=100.00%, avg=22274.00, stdev=118.37, samples=4 00:19:28.287 iops : min= 5536, max= 5600, avg=5568.50, stdev=29.59, samples=4 00:19:28.287 lat (msec) : 4=0.05%, 10=8.87%, 20=91.05%, 50=0.03% 00:19:28.287 cpu : usr=75.71%, sys=19.61%, ctx=7, majf=0, minf=8 00:19:28.287 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:28.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.287 issued rwts: total=11267,11192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.287 00:19:28.287 Run status group 0 (all jobs): 00:19:28.287 READ: bw=21.9MiB/s (23.0MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=44.0MiB (46.1MB), run=2010-2010msec 00:19:28.287 WRITE: bw=21.8MiB/s (22.8MB/s), 21.8MiB/s-21.8MiB/s (22.8MB/s-22.8MB/s), io=43.7MiB (45.8MB), run=2010-2010msec 00:19:28.287 02:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:28.287 02:23:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:19:28.287 02:23:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:19:28.547 02:23:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:28.806 02:23:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:19:29.065 02:23:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:29.323 02:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:30.261 02:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:30.261 02:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:30.261 02:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:30.261 02:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:30.261 02:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:30.261 02:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.261 02:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:30.261 02:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.261 02:23:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.261 rmmod nvme_tcp 00:19:30.261 rmmod nvme_fabrics 00:19:30.261 rmmod nvme_keyring 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 89959 ']' 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 89959 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 89959 ']' 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 89959 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89959 00:19:30.261 killing process with pid 89959 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89959' 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 89959 00:19:30.261 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 89959 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.520 
02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:30.520 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:30.779 ************************************ 00:19:30.779 END TEST nvmf_fio_host 00:19:30.779 ************************************ 00:19:30.779 00:19:30.779 real 0m19.419s 00:19:30.779 user 1m24.911s 00:19:30.779 sys 0m4.274s 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.779 ************************************ 00:19:30.779 START TEST nvmf_failover 00:19:30.779 ************************************ 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:30.779 * Looking for test storage... 
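The fio run that produced the numbers above goes through the SPDK NVMe fio plugin rather than the kernel initiator. Stripped of the sanitizer probing, the invocation from this trace reduces to roughly the following sketch (paths and the 10.0.0.3:4420 listener are the ones used in this run; building the plugin itself, i.e. configuring SPDK with fio support, is not shown in this log):

  FIO=/usr/src/fio/fio
  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  JOB=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

  # Preloading the plugin makes the job file's ioengine=spdk resolve to the SPDK NVMe engine;
  # the --filename string carries the NVMe-oF transport, address family, address, service id and namespace.
  LD_PRELOAD="$PLUGIN" "$FIO" "$JOB" \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096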
00:19:30.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:19:30.779 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:31.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.039 --rc genhtml_branch_coverage=1 00:19:31.039 --rc genhtml_function_coverage=1 00:19:31.039 --rc genhtml_legend=1 00:19:31.039 --rc geninfo_all_blocks=1 00:19:31.039 --rc geninfo_unexecuted_blocks=1 00:19:31.039 00:19:31.039 ' 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:31.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.039 --rc genhtml_branch_coverage=1 00:19:31.039 --rc genhtml_function_coverage=1 00:19:31.039 --rc genhtml_legend=1 00:19:31.039 --rc geninfo_all_blocks=1 00:19:31.039 --rc geninfo_unexecuted_blocks=1 00:19:31.039 00:19:31.039 ' 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:31.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.039 --rc genhtml_branch_coverage=1 00:19:31.039 --rc genhtml_function_coverage=1 00:19:31.039 --rc genhtml_legend=1 00:19:31.039 --rc geninfo_all_blocks=1 00:19:31.039 --rc geninfo_unexecuted_blocks=1 00:19:31.039 00:19:31.039 ' 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:31.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.039 --rc genhtml_branch_coverage=1 00:19:31.039 --rc genhtml_function_coverage=1 00:19:31.039 --rc genhtml_legend=1 00:19:31.039 --rc geninfo_all_blocks=1 00:19:31.039 --rc geninfo_unexecuted_blocks=1 00:19:31.039 00:19:31.039 ' 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.039 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.040 
02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.040 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
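The lt 1.15 2 check traced a few entries back is scripts/common.sh deciding whether the installed lcov predates 2.x, which determines the LCOV_OPTS the test exports. A stripped-down sketch of that field-by-field comparison (a simplification for illustration, not the exact upstream cmp_versions helper):

  # Succeed if version $1 is strictly lower than $2, comparing numeric fields split on ".-:".
  version_lt() {
      local IFS=.-: v
      local -a a=($1) b=($2)
      for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
          local x=${a[v]:-0} y=${b[v]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1   # equal is not strictly lower
  }

  version_lt "$(lcov --version | awk '{print $NF}')" 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'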
00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:31.040 Cannot find device "nvmf_init_br" 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:31.040 Cannot find device "nvmf_init_br2" 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:31.040 Cannot find device "nvmf_tgt_br" 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:31.040 Cannot find device "nvmf_tgt_br2" 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:31.040 Cannot find device "nvmf_init_br" 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:31.040 Cannot find device "nvmf_init_br2" 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:31.040 Cannot find device "nvmf_tgt_br" 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:31.040 Cannot find device "nvmf_tgt_br2" 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:31.040 Cannot find device "nvmf_br" 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:31.040 Cannot find device "nvmf_init_if" 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:31.040 Cannot find device "nvmf_init_if2" 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:31.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:31.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:31.040 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:31.299 
02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:31.299 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:31.299 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:31.299 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:31.299 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:31.299 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:31.299 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:31.299 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:31.299 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:31.299 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:31.299 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:31.299 02:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:31.299 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:31.299 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:31.299 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:31.299 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:31.299 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:31.299 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:31.299 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:31.299 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:31.299 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:31.299 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:31.300 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:31.300 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:19:31.300 00:19:31.300 --- 10.0.0.3 ping statistics --- 00:19:31.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.300 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:31.300 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:31.300 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:19:31.300 00:19:31.300 --- 10.0.0.4 ping statistics --- 00:19:31.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.300 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:31.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:19:31.300 00:19:31.300 --- 10.0.0.1 ping statistics --- 00:19:31.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.300 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:31.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:19:31.300 00:19:31.300 --- 10.0.0.2 ping statistics --- 00:19:31.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.300 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:31.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
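With connectivity verified by the pings above, the environment nvmf_veth_init has just assembled amounts to two veth pairs whose bridge ends meet in nvmf_br, with the target ends moved into the nvmf_tgt_ns_spdk namespace. Condensed from the trace (the second initiator/target pair and the error guards are omitted from this sketch):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side,    10.0.0.3
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3    # default namespace (initiator) -> target namespace, across the bridge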
00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=90562 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 90562 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 90562 ']' 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.300 02:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:31.559 [2024-11-08 02:23:33.209491] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:31.559 [2024-11-08 02:23:33.209770] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.559 [2024-11-08 02:23:33.345428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:31.559 [2024-11-08 02:23:33.376529] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.559 [2024-11-08 02:23:33.376749] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.559 [2024-11-08 02:23:33.376970] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.559 [2024-11-08 02:23:33.377073] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.559 [2024-11-08 02:23:33.377223] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
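nvmfappstart then launches nvmf_tgt inside that namespace and waits for its RPC socket before any configuration is issued. Roughly what the helper does; the polling loop is a simplification of waitforlisten, and using rpc_get_methods as the probe is an assumption about its internals, not a transcript of them:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # Block until the target answers on its default RPC socket; bail out if it died while starting.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || exit 1
      sleep 0.5
  done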
00:19:31.559 [2024-11-08 02:23:33.377355] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.559 [2024-11-08 02:23:33.377505] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.559 [2024-11-08 02:23:33.377526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.559 [2024-11-08 02:23:33.405131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.496 02:23:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.496 02:23:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:32.496 02:23:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:32.496 02:23:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.496 02:23:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:32.496 02:23:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.496 02:23:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:32.755 [2024-11-08 02:23:34.472672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.755 02:23:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:33.013 Malloc0 00:19:33.013 02:23:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:33.272 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:33.531 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:33.790 [2024-11-08 02:23:35.504839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:33.790 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:34.048 [2024-11-08 02:23:35.741000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:34.048 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:34.310 [2024-11-08 02:23:35.957113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:34.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
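Everything the failover test needs is now in place: a 64 MiB malloc namespace exported by cnode1 over three TCP listeners on 10.0.0.3, plus a separate bdevperf process idling on its own RPC socket until paths are attached. The provisioning, condensed from the RPC calls in the trace above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                  # three listeners = three candidate paths
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s $port
  done

  # Initiator-side workload generator; -z keeps it idle until bdev_nvme_attach_controller
  # and the perform_tests RPC arrive over /var/tmp/bdevperf.sock.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &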
00:19:34.310 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=90619 00:19:34.310 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:34.310 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:34.310 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 90619 /var/tmp/bdevperf.sock 00:19:34.310 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 90619 ']' 00:19:34.310 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.310 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.310 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.310 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.310 02:23:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:35.252 02:23:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.252 02:23:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:35.252 02:23:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:35.511 NVMe0n1 00:19:35.511 02:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:35.769 00:19:35.769 02:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:35.769 02:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=90643 00:19:35.769 02:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:37.145 02:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:37.145 02:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:40.433 02:23:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:40.433 00:19:40.433 02:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:40.692 02:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:43.985 02:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:43.985 [2024-11-08 02:23:45.730476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:43.985 02:23:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:44.922 02:23:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:45.181 [2024-11-08 02:23:46.972320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bd130 is same with the state(6) to be set 00:19:45.181 [2024-11-08 02:23:46.972367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bd130 is same with the state(6) to be set 00:19:45.181 [2024-11-08 02:23:46.972394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bd130 is same with the state(6) to be set 00:19:45.181 02:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 90643 00:19:51.758 { 00:19:51.758 "results": [ 00:19:51.758 { 00:19:51.758 "job": "NVMe0n1", 00:19:51.758 "core_mask": "0x1", 00:19:51.758 "workload": "verify", 00:19:51.758 "status": "finished", 00:19:51.758 "verify_range": { 00:19:51.758 "start": 0, 00:19:51.758 "length": 16384 00:19:51.758 }, 00:19:51.758 "queue_depth": 128, 00:19:51.758 "io_size": 4096, 00:19:51.758 "runtime": 15.007886, 00:19:51.758 "iops": 10119.813010306714, 00:19:51.758 "mibps": 39.5305195715106, 00:19:51.758 "io_failed": 3333, 00:19:51.758 "io_timeout": 0, 00:19:51.758 "avg_latency_us": 12348.491313610299, 00:19:51.758 "min_latency_us": 532.48, 00:19:51.758 "max_latency_us": 13464.66909090909 00:19:51.758 } 00:19:51.758 ], 00:19:51.758 "core_count": 1 00:19:51.758 } 00:19:51.758 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 90619 00:19:51.758 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 90619 ']' 00:19:51.758 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 90619 00:19:51.758 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:51.758 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:51.758 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90619 00:19:51.758 killing process with pid 90619 00:19:51.758 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:51.759 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:51.759 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90619' 00:19:51.759 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 90619 00:19:51.759 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 90619 00:19:51.759 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:51.759 [2024-11-08 02:23:36.021502] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:19:51.759 [2024-11-08 02:23:36.021606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90619 ] 00:19:51.759 [2024-11-08 02:23:36.157498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.759 [2024-11-08 02:23:36.197848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.759 [2024-11-08 02:23:36.230467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:51.759 Running I/O for 15 seconds... 00:19:51.759 7829.00 IOPS, 30.58 MiB/s [2024-11-08T02:23:53.643Z] [2024-11-08 02:23:38.813002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:51.759 [2024-11-08 02:23:38.813296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.759 [2024-11-08 02:23:38.813575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.759 [2024-11-08 02:23:38.813589] 
nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued WRITE (sqid:1, lba 71784-71840) and READ (sqid:1, lba 70824-71624) commands, one NOTICE pair per 8-block I/O, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-08 02:23:38.813601 - 02:23:38.816620]
00:19:51.763 [2024-11-08 02:23:38.816634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2320090 is same with the state(6) to be set
00:19:51.763 [2024-11-08 02:23:38.816649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:51.763 [2024-11-08 02:23:38.816659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:51.763 [2024-11-08 02:23:38.816668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71632 len:8 PRP1 0x0 PRP2 0x0
00:19:51.763 [2024-11-08 02:23:38.816682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:51.763 [2024-11-08 02:23:38.816725] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2320090 was disconnected and freed. reset controller.
00:19:51.763 [2024-11-08 02:23:38.816742] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:19:51.763 [2024-11-08 02:23:38.816790 - 02:23:38.816888] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:51.763 [2024-11-08 02:23:38.816901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:51.763 [2024-11-08 02:23:38.820405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:51.763 [2024-11-08 02:23:38.820441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fecc0 (9): Bad file descriptor
00:19:51.763 [2024-11-08 02:23:38.853716] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:51.763 8834.00 IOPS, 34.51 MiB/s [2024-11-08T02:23:53.647Z] 9398.67 IOPS, 36.71 MiB/s [2024-11-08T02:23:53.647Z] 9697.00 IOPS, 37.88 MiB/s [2024-11-08T02:23:53.647Z]
00:19:51.764 nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: queued READ (sqid:1, lba 119632-119880) and WRITE (sqid:1, lba 120144-120520) commands, one NOTICE pair per 8-block I/O, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-11-08 02:23:42.445866 - 02:23:42.449599]
00:19:51.766 [2024-11-08 02:23:42.449614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:112 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.766 [2024-11-08 02:23:42.449627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.766 [2024-11-08 02:23:42.449642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.766 [2024-11-08 02:23:42.449655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.766 [2024-11-08 02:23:42.449670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.766 [2024-11-08 02:23:42.449683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.766 [2024-11-08 02:23:42.449697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.766 [2024-11-08 02:23:42.449710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.766 [2024-11-08 02:23:42.449724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.449738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.449753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.449766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.449780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.449793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.449807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.449820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.449835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.449855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.449872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.449885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.449899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 
lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.449913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.449928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.449940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.449955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.449968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.449982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.449995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.450022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.450050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.450077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.450105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.450146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.450177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:51.767 [2024-11-08 02:23:42.450205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.450240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.450269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.767 [2024-11-08 02:23:42.450297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.450324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.450352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.450380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.450408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.450436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.450464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 
02:23:42.450492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.450519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.767 [2024-11-08 02:23:42.450547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.767 [2024-11-08 02:23:42.450563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.450966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:42.450986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.451043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.768 [2024-11-08 02:23:42.451058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.768 [2024-11-08 02:23:42.451069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120136 len:8 PRP1 0x0 PRP2 0x0 00:19:51.768 [2024-11-08 02:23:42.451082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.451142] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2320d80 was disconnected and freed. reset controller. 
00:19:51.768 [2024-11-08 02:23:42.451163] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:19:51.768 [2024-11-08 02:23:42.451215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.768 [2024-11-08 02:23:42.451251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.451266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.768 [2024-11-08 02:23:42.451279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.451292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.768 [2024-11-08 02:23:42.451305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.451318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.768 [2024-11-08 02:23:42.451331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:42.451343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:51.768 [2024-11-08 02:23:42.451377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fecc0 (9): Bad file descriptor 00:19:51.768 [2024-11-08 02:23:42.455037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.768 [2024-11-08 02:23:42.489681] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:51.768 9742.80 IOPS, 38.06 MiB/s [2024-11-08T02:23:53.652Z] 9871.67 IOPS, 38.56 MiB/s [2024-11-08T02:23:53.652Z] 9960.86 IOPS, 38.91 MiB/s [2024-11-08T02:23:53.652Z] 10018.25 IOPS, 39.13 MiB/s [2024-11-08T02:23:53.652Z] 10057.11 IOPS, 39.29 MiB/s [2024-11-08T02:23:53.652Z] [2024-11-08 02:23:46.972844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:46.972891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:46.972916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:46.972929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:46.972945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:46.972958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:46.972972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.768 [2024-11-08 02:23:46.973006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:46.973021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.768 [2024-11-08 02:23:46.973034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.768 [2024-11-08 02:23:46.973047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.768 [2024-11-08 02:23:46.973059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 
nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110008 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:51.769 [2024-11-08 02:23:46.973499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:51.769 [2024-11-08 02:23:46.973833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.973960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.769 [2024-11-08 02:23:46.973989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.769 [2024-11-08 02:23:46.974001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.770 [2024-11-08 02:23:46.974042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 
02:23:46.974146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.770 [2024-11-08 02:23:46.974501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.770 [2024-11-08 02:23:46.974529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.770 [2024-11-08 02:23:46.974562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.770 [2024-11-08 02:23:46.974588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.770 [2024-11-08 02:23:46.974614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.770 [2024-11-08 02:23:46.974640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.770 [2024-11-08 02:23:46.974665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.770 [2024-11-08 02:23:46.974691] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.770 [2024-11-08 02:23:46.974768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.770 [2024-11-08 02:23:46.974781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.771 [2024-11-08 02:23:46.974793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.974807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.771 [2024-11-08 02:23:46.974819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.974833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.771 [2024-11-08 02:23:46.974845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.974859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.771 [2024-11-08 02:23:46.974877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.974891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.771 [2024-11-08 02:23:46.974929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.974946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.974959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.974974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.974987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.771 [2024-11-08 02:23:46.975401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.771 [2024-11-08 02:23:46.975428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.771 [2024-11-08 02:23:46.975454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.771 [2024-11-08 02:23:46.975494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.771 [2024-11-08 02:23:46.975508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 
02:23:46.975586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:51.772 [2024-11-08 02:23:46.975842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.772 [2024-11-08 02:23:46.975868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.772 [2024-11-08 02:23:46.975894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.772 [2024-11-08 02:23:46.975920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.772 [2024-11-08 02:23:46.975945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.772 [2024-11-08 02:23:46.975977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.975991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.772 [2024-11-08 02:23:46.976003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.976017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:51.772 [2024-11-08 02:23:46.976029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.976042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245e3c0 is same with the state(6) to be set 00:19:51.772 [2024-11-08 02:23:46.976056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.772 [2024-11-08 02:23:46.976066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.772 [2024-11-08 02:23:46.976075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109880 len:8 PRP1 0x0 PRP2 0x0 00:19:51.772 [2024-11-08 02:23:46.976087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.976102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.772 [2024-11-08 02:23:46.976111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.772 [2024-11-08 02:23:46.976120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110336 
len:8 PRP1 0x0 PRP2 0x0 00:19:51.772 [2024-11-08 02:23:46.976144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.976159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.772 [2024-11-08 02:23:46.976168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.772 [2024-11-08 02:23:46.976178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110344 len:8 PRP1 0x0 PRP2 0x0 00:19:51.772 [2024-11-08 02:23:46.976190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.976202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.772 [2024-11-08 02:23:46.976211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.772 [2024-11-08 02:23:46.976221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110352 len:8 PRP1 0x0 PRP2 0x0 00:19:51.772 [2024-11-08 02:23:46.976232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.976244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.772 [2024-11-08 02:23:46.976253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.772 [2024-11-08 02:23:46.976262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110360 len:8 PRP1 0x0 PRP2 0x0 00:19:51.772 [2024-11-08 02:23:46.976274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.772 [2024-11-08 02:23:46.976286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110368 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110376 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110384 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 
02:23:46.976409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110392 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110400 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110408 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110416 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110424 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110432 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110440 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110448 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110456 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110464 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110472 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110480 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.976936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:51.773 [2024-11-08 02:23:46.976945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:51.773 [2024-11-08 02:23:46.976954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110488 len:8 PRP1 0x0 PRP2 0x0 00:19:51.773 [2024-11-08 02:23:46.976966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.773 [2024-11-08 02:23:46.977012] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x245e3c0 was disconnected and freed. reset controller. 00:19:51.773 [2024-11-08 02:23:46.977029] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:19:51.773 [2024-11-08 02:23:46.977078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.773 [2024-11-08 02:23:46.977097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.774 [2024-11-08 02:23:46.977124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.774 [2024-11-08 02:23:46.977137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.774 [2024-11-08 02:23:46.977150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.774 [2024-11-08 02:23:46.977162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.774 [2024-11-08 02:23:46.977174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.774 [2024-11-08 02:23:46.977186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.774 [2024-11-08 02:23:46.977198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:51.774 [2024-11-08 02:23:46.980644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:51.774 [2024-11-08 02:23:46.980678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fecc0 (9): Bad file descriptor 00:19:51.774 [2024-11-08 02:23:47.013891] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:51.774 10033.30 IOPS, 39.19 MiB/s [2024-11-08T02:23:53.658Z] 10062.27 IOPS, 39.31 MiB/s [2024-11-08T02:23:53.658Z] 10079.75 IOPS, 39.37 MiB/s [2024-11-08T02:23:53.658Z] 10095.77 IOPS, 39.44 MiB/s [2024-11-08T02:23:53.658Z] 10110.64 IOPS, 39.49 MiB/s [2024-11-08T02:23:53.658Z] 10118.73 IOPS, 39.53 MiB/s 00:19:51.774 Latency(us) 00:19:51.774 [2024-11-08T02:23:53.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.774 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:51.774 Verification LBA range: start 0x0 length 0x4000 00:19:51.774 NVMe0n1 : 15.01 10119.81 39.53 222.08 0.00 12348.49 532.48 13464.67 00:19:51.774 [2024-11-08T02:23:53.658Z] =================================================================================================================== 00:19:51.774 [2024-11-08T02:23:53.658Z] Total : 10119.81 39.53 222.08 0.00 12348.49 532.48 13464.67 00:19:51.774 Received shutdown signal, test time was about 15.000000 seconds 00:19:51.774 00:19:51.774 Latency(us) 00:19:51.774 [2024-11-08T02:23:53.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.774 [2024-11-08T02:23:53.658Z] =================================================================================================================== 00:19:51.774 [2024-11-08T02:23:53.658Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:51.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90816 00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90816 /var/tmp/bdevperf.sock 00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 90816 ']' 00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.774 02:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:51.774 02:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.774 02:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:51.774 02:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:51.774 [2024-11-08 02:23:53.396350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:51.774 02:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:51.774 [2024-11-08 02:23:53.620462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:52.033 02:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:52.033 NVMe0n1 00:19:52.292 02:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:52.551 00:19:52.551 02:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:52.810 00:19:52.810 02:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:52.810 02:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:53.069 02:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:53.327 02:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:56.613 02:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:56.613 02:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:56.613 02:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90885 00:19:56.613 02:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:56.613 02:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 90885 00:19:57.991 { 00:19:57.991 "results": [ 00:19:57.991 { 00:19:57.991 "job": "NVMe0n1", 00:19:57.991 "core_mask": "0x1", 00:19:57.991 "workload": "verify", 00:19:57.991 "status": "finished", 00:19:57.991 "verify_range": { 00:19:57.991 "start": 0, 00:19:57.991 "length": 16384 00:19:57.991 }, 00:19:57.991 "queue_depth": 128, 00:19:57.991 "io_size": 4096, 
00:19:57.991 "runtime": 1.008657, 00:19:57.991 "iops": 8344.75941772079, 00:19:57.991 "mibps": 32.59671647547184, 00:19:57.991 "io_failed": 0, 00:19:57.991 "io_timeout": 0, 00:19:57.991 "avg_latency_us": 15258.211610701288, 00:19:57.991 "min_latency_us": 1020.2763636363636, 00:19:57.991 "max_latency_us": 15192.436363636363 00:19:57.991 } 00:19:57.991 ], 00:19:57.991 "core_count": 1 00:19:57.991 } 00:19:57.991 02:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:57.991 [2024-11-08 02:23:52.940064] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:57.991 [2024-11-08 02:23:52.940162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90816 ] 00:19:57.991 [2024-11-08 02:23:53.069172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.991 [2024-11-08 02:23:53.103311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.991 [2024-11-08 02:23:53.130580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:57.991 [2024-11-08 02:23:55.026320] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:57.991 [2024-11-08 02:23:55.026433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.991 [2024-11-08 02:23:55.026457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.991 [2024-11-08 02:23:55.026475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.991 [2024-11-08 02:23:55.026488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.991 [2024-11-08 02:23:55.026501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.991 [2024-11-08 02:23:55.026529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.991 [2024-11-08 02:23:55.026542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.991 [2024-11-08 02:23:55.026554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.991 [2024-11-08 02:23:55.026566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.991 [2024-11-08 02:23:55.026610] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.991 [2024-11-08 02:23:55.026638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e3cc0 (9): Bad file descriptor 00:19:57.991 [2024-11-08 02:23:55.038555] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:57.991 Running I/O for 1 seconds... 
00:19:57.991 8273.00 IOPS, 32.32 MiB/s 00:19:57.991 Latency(us) 00:19:57.991 [2024-11-08T02:23:59.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.991 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:57.991 Verification LBA range: start 0x0 length 0x4000 00:19:57.991 NVMe0n1 : 1.01 8344.76 32.60 0.00 0.00 15258.21 1020.28 15192.44 00:19:57.991 [2024-11-08T02:23:59.875Z] =================================================================================================================== 00:19:57.991 [2024-11-08T02:23:59.875Z] Total : 8344.76 32.60 0.00 0.00 15258.21 1020.28 15192.44 00:19:57.991 02:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:57.991 02:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:57.991 02:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:58.250 02:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:58.250 02:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:58.509 02:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:58.767 02:24:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 90816 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 90816 ']' 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 90816 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90816 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:02.054 killing process with pid 90816 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90816' 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 90816 00:20:02.054 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 90816 00:20:02.314 02:24:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:20:02.314 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:02.573 rmmod nvme_tcp 00:20:02.573 rmmod nvme_fabrics 00:20:02.573 rmmod nvme_keyring 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 90562 ']' 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 90562 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 90562 ']' 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 90562 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90562 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:02.573 killing process with pid 90562 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90562' 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 90562 00:20:02.573 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 90562 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:02.833 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:20:03.093 00:20:03.093 real 0m32.267s 00:20:03.093 user 2m4.168s 00:20:03.093 sys 0m5.325s 00:20:03.093 ************************************ 00:20:03.093 END TEST nvmf_failover 00:20:03.093 ************************************ 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.093 ************************************ 00:20:03.093 START TEST nvmf_host_discovery 00:20:03.093 ************************************ 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:03.093 * Looking for test storage... 
00:20:03.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:20:03.093 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:03.353 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:03.353 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.353 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.353 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.353 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.353 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.353 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.353 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.353 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.354 02:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.354 --rc genhtml_branch_coverage=1 00:20:03.354 --rc genhtml_function_coverage=1 00:20:03.354 --rc genhtml_legend=1 00:20:03.354 --rc geninfo_all_blocks=1 00:20:03.354 --rc geninfo_unexecuted_blocks=1 00:20:03.354 00:20:03.354 ' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.354 --rc genhtml_branch_coverage=1 00:20:03.354 --rc genhtml_function_coverage=1 00:20:03.354 --rc genhtml_legend=1 00:20:03.354 --rc geninfo_all_blocks=1 00:20:03.354 --rc geninfo_unexecuted_blocks=1 00:20:03.354 00:20:03.354 ' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.354 --rc genhtml_branch_coverage=1 00:20:03.354 --rc genhtml_function_coverage=1 00:20:03.354 --rc genhtml_legend=1 00:20:03.354 --rc geninfo_all_blocks=1 00:20:03.354 --rc geninfo_unexecuted_blocks=1 00:20:03.354 00:20:03.354 ' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:03.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.354 --rc genhtml_branch_coverage=1 00:20:03.354 --rc genhtml_function_coverage=1 00:20:03.354 --rc genhtml_legend=1 00:20:03.354 --rc geninfo_all_blocks=1 00:20:03.354 --rc geninfo_unexecuted_blocks=1 00:20:03.354 00:20:03.354 ' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.354 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.354 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:03.355 Cannot find device "nvmf_init_br" 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:03.355 Cannot find device "nvmf_init_br2" 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:03.355 Cannot find device "nvmf_tgt_br" 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:03.355 Cannot find device "nvmf_tgt_br2" 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:03.355 Cannot find device "nvmf_init_br" 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:03.355 Cannot find device "nvmf_init_br2" 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:03.355 Cannot find device "nvmf_tgt_br" 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:03.355 Cannot find device "nvmf_tgt_br2" 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:03.355 Cannot find device "nvmf_br" 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:03.355 Cannot find device "nvmf_init_if" 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:03.355 Cannot find device "nvmf_init_if2" 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:03.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:03.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:03.355 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:03.614 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:03.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:03.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:20:03.615 00:20:03.615 --- 10.0.0.3 ping statistics --- 00:20:03.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.615 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:03.615 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:03.615 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:20:03.615 00:20:03.615 --- 10.0.0.4 ping statistics --- 00:20:03.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.615 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:03.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:03.615 00:20:03.615 --- 10.0.0.1 ping statistics --- 00:20:03.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.615 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:03.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:03.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:20:03.615 00:20:03.615 --- 10.0.0.2 ping statistics --- 00:20:03.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.615 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=91203 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 91203 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 91203 ']' 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.615 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:03.874 [2024-11-08 02:24:05.521195] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
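For reference, the sequence above assembles the test's point-to-point topology: two veth pairs bridged on the host, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace, then verified with single-packet pings in both directions before the target application is launched inside the namespace. A minimal sketch of the same setup, reduced to one initiator/target pair (interface names, bridge name, and 10.0.0.x addresses taken from this run; firewall and second-pair steps omitted):

  ip netns add nvmf_tgt_ns_spdk                                # target-side namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + its bridge end
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + its bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br up; ip link set nvmf_init_br master nvmf_br     # enslave both bridge ends
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.3                                           # host -> namespace reachability
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace -> host reachability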
00:20:03.874 [2024-11-08 02:24:05.521287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.874 [2024-11-08 02:24:05.663065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.874 [2024-11-08 02:24:05.705762] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.874 [2024-11-08 02:24:05.705829] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.874 [2024-11-08 02:24:05.705844] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.874 [2024-11-08 02:24:05.705855] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.874 [2024-11-08 02:24:05.705864] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.874 [2024-11-08 02:24:05.705897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.874 [2024-11-08 02:24:05.740939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.134 [2024-11-08 02:24:05.835124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.134 [2024-11-08 02:24:05.847271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.134 02:24:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.134 null0 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.134 null1 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.134 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=91228 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 91228 /tmp/host.sock 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 91228 ']' 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.134 02:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.134 [2024-11-08 02:24:05.940817] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
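With connectivity confirmed, the target inside the namespace is provisioned over its RPC socket (TCP transport, discovery listener on 10.0.0.3:8009, two null bdevs), and a second nvmf_tgt instance is started on /tmp/host.sock to act as the host. A hedged equivalent using SPDK's rpc.py client in place of the test's rpc_cmd wrapper, assuming a standard checkout layout for build/bin/nvmf_tgt and scripts/rpc.py (same arguments as this run):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target app inside the namespace
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, same options as above
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  ./scripts/rpc.py bdev_null_create null0 1000 512                              # 1000 MB null bdev, 512-byte blocks
  ./scripts/rpc.py bdev_null_create null1 1000 512
  ./scripts/rpc.py bdev_wait_for_examine
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &                               # host-side app on its own RPC socket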
00:20:04.134 [2024-11-08 02:24:05.940912] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91228 ] 00:20:04.393 [2024-11-08 02:24:06.082849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.393 [2024-11-08 02:24:06.123561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.393 [2024-11-08 02:24:06.155691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:04.393 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.653 02:24:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.653 02:24:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:04.653 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.913 [2024-11-08 02:24:06.580041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:04.913 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.173 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:20:05.173 02:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:20:05.432 [2024-11-08 02:24:07.230011] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:05.432 [2024-11-08 02:24:07.230038] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:05.432 [2024-11-08 02:24:07.230055] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:05.432 
[2024-11-08 02:24:07.236046] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:05.432 [2024-11-08 02:24:07.292604] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:05.432 [2024-11-08 02:24:07.292626] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:05.999 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
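The host-side checks that follow all use the same pattern: start the discovery service against 10.0.0.3:8009, then poll bdev_nvme_get_controllers and bdev_get_bdevs over /tmp/host.sock until the expected controller and namespace names appear. An illustrative reduction of that loop, again substituting rpc.py for the test's rpc_cmd/waitforcondition helpers (NQN, addresses, and socket path taken from this run):

  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  for _ in $(seq 1 10); do                                      # retry up to 10 times, 1s apart
      names=$(./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs)
      [[ "$names" == "nvme0" ]] && break                        # controller attached by discovery
      sleep 1
  done
  ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # expect "nvme0n1"

The later steps in the log (adding null1 as a second namespace, listening on 4421, then removing the 4420 listener and stopping discovery) are verified with the same poll-until-expected loop against the bdev list, the per-controller trsvcid paths, and the notify_get_notifications count.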
00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.259 02:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:06.259 02:24:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:06.259 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.260 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.520 [2024-11-08 02:24:08.145269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:06.520 [2024-11-08 02:24:08.145703] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:06.520 [2024-11-08 02:24:08.145729] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:06.520 [2024-11-08 02:24:08.151713] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:20:06.520 02:24:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:06.520 [2024-11-08 02:24:08.214086] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:06.520 [2024-11-08 02:24:08.214150] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:06.520 [2024-11-08 02:24:08.214159] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.520 [2024-11-08 02:24:08.370238] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:06.520 [2024-11-08 02:24:08.370265] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:06.520 [2024-11-08 02:24:08.370688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.520 [2024-11-08 02:24:08.370713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.520 [2024-11-08 02:24:08.370740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.520 [2024-11-08 02:24:08.370748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.520 [2024-11-08 02:24:08.370757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.520 [2024-11-08 02:24:08.370765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.520 [2024-11-08 02:24:08.370772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.520 [2024-11-08 02:24:08.370780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.520 [2024-11-08 02:24:08.370788] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1434740 is same with the state(6) to be set 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:06.520 [2024-11-08 02:24:08.376256] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:20:06.520 [2024-11-08 02:24:08.376284] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:06.520 [2024-11-08 02:24:08.376334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1434740 (9): Bad file descriptor 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:06.520 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:06.521 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:06.521 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.521 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:06.521 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.521 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:06.780 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:06.781 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:06.781 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:20:06.781 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:06.781 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:06.781 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.781 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:06.781 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:06.781 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:06.781 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:07.040 02:24:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.040 02:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:07.977 [2024-11-08 02:24:09.800864] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:07.977 [2024-11-08 02:24:09.800887] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:07.977 [2024-11-08 02:24:09.800903] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:07.977 [2024-11-08 02:24:09.806936] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:20:08.236 [2024-11-08 02:24:09.867601] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:08.236 [2024-11-08 02:24:09.867654] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.236 request: 00:20:08.236 { 00:20:08.236 "name": "nvme", 00:20:08.236 "trtype": "tcp", 00:20:08.236 "traddr": "10.0.0.3", 00:20:08.236 "adrfam": "ipv4", 00:20:08.236 "trsvcid": "8009", 00:20:08.236 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:08.236 "wait_for_attach": true, 00:20:08.236 "method": "bdev_nvme_start_discovery", 00:20:08.236 "req_id": 1 00:20:08.236 } 00:20:08.236 Got JSON-RPC error response 00:20:08.236 response: 00:20:08.236 { 00:20:08.236 "code": -17, 00:20:08.236 "message": "File exists" 00:20:08.236 } 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.236 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local 
es=0 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.237 02:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.237 request: 00:20:08.237 { 00:20:08.237 "name": "nvme_second", 00:20:08.237 "trtype": "tcp", 00:20:08.237 "traddr": "10.0.0.3", 00:20:08.237 "adrfam": "ipv4", 00:20:08.237 "trsvcid": "8009", 00:20:08.237 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:08.237 "wait_for_attach": true, 00:20:08.237 "method": "bdev_nvme_start_discovery", 00:20:08.237 "req_id": 1 00:20:08.237 } 00:20:08.237 Got JSON-RPC error response 00:20:08.237 response: 00:20:08.237 { 00:20:08.237 "code": -17, 00:20:08.237 "message": "File exists" 00:20:08.237 } 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:08.237 02:24:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:08.237 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.496 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:08.496 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:08.496 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:20:08.496 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:08.496 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.496 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.496 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.496 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.496 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:08.496 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.496 02:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:09.431 [2024-11-08 02:24:11.128102] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.431 [2024-11-08 02:24:11.128365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16006d0 with addr=10.0.0.3, port=8010 00:20:09.431 [2024-11-08 02:24:11.128400] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:09.431 [2024-11-08 02:24:11.128410] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:09.431 [2024-11-08 02:24:11.128420] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:10.366 [2024-11-08 02:24:12.128076] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.366 [2024-11-08 02:24:12.128285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16006d0 with addr=10.0.0.3, port=8010 00:20:10.366 [2024-11-08 02:24:12.128321] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:10.366 [2024-11-08 02:24:12.128331] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:10.366 [2024-11-08 02:24:12.128340] bdev_nvme.c:7224:discovery_poller: *ERROR*: 
Discovery[10.0.0.3:8010] could not start discovery connect 00:20:11.301 [2024-11-08 02:24:13.128002] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:20:11.301 request: 00:20:11.301 { 00:20:11.301 "name": "nvme_second", 00:20:11.301 "trtype": "tcp", 00:20:11.301 "traddr": "10.0.0.3", 00:20:11.301 "adrfam": "ipv4", 00:20:11.301 "trsvcid": "8010", 00:20:11.301 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:11.301 "wait_for_attach": false, 00:20:11.301 "attach_timeout_ms": 3000, 00:20:11.301 "method": "bdev_nvme_start_discovery", 00:20:11.301 "req_id": 1 00:20:11.301 } 00:20:11.301 Got JSON-RPC error response 00:20:11.301 response: 00:20:11.301 { 00:20:11.301 "code": -110, 00:20:11.301 "message": "Connection timed out" 00:20:11.301 } 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.301 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 91228 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.559 rmmod nvme_tcp 00:20:11.559 rmmod nvme_fabrics 00:20:11.559 rmmod nvme_keyring 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.559 02:24:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 91203 ']' 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 91203 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 91203 ']' 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 91203 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91203 00:20:11.559 killing process with pid 91203 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91203' 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 91203 00:20:11.559 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 91203 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 
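[editor's note] The teardown above removes the test firewall rules by tag rather than by rule number: every rule the harness adds carries an "SPDK_NVMF:" comment (see the ipts calls later in this log), so cleanup is a save/filter/restore round trip. A minimal sketch of that idea — the helper bodies are assumptions, not the verbatim autotest code:

  # tag each rule so it can be found again at teardown time
  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  # cleanup: re-load the current ruleset minus anything carrying the tag
  iptables-save | grep -v SPDK_NVMF | iptables-restore
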
00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.817 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:20:12.076 00:20:12.076 real 0m8.875s 00:20:12.076 user 0m16.917s 00:20:12.076 sys 0m1.898s 00:20:12.076 ************************************ 00:20:12.076 END TEST nvmf_host_discovery 00:20:12.076 ************************************ 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.076 ************************************ 00:20:12.076 START TEST nvmf_host_multipath_status 00:20:12.076 ************************************ 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:12.076 * Looking for test storage... 
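[editor's note] nvmf_host_discovery, which finishes above, drives all of its assertions through the waitforcondition helper visible throughout the trace (local cond, local max=10, (( max-- )), eval, return 0). A rough re-creation of that polling loop, with the pause between retries assumed since the xtrace output does not show it:

  waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
      eval "$cond" && return 0   # condition met, stop polling
      sleep 1                    # assumed pause between attempts
    done
    return 1                     # give up after max attempts
  }
  # typical call from the trace:
  # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
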
00:20:12.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:12.076 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:12.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.077 --rc genhtml_branch_coverage=1 00:20:12.077 --rc genhtml_function_coverage=1 00:20:12.077 --rc genhtml_legend=1 00:20:12.077 --rc geninfo_all_blocks=1 00:20:12.077 --rc geninfo_unexecuted_blocks=1 00:20:12.077 00:20:12.077 ' 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:12.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.077 --rc genhtml_branch_coverage=1 00:20:12.077 --rc genhtml_function_coverage=1 00:20:12.077 --rc genhtml_legend=1 00:20:12.077 --rc geninfo_all_blocks=1 00:20:12.077 --rc geninfo_unexecuted_blocks=1 00:20:12.077 00:20:12.077 ' 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:12.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.077 --rc genhtml_branch_coverage=1 00:20:12.077 --rc genhtml_function_coverage=1 00:20:12.077 --rc genhtml_legend=1 00:20:12.077 --rc geninfo_all_blocks=1 00:20:12.077 --rc geninfo_unexecuted_blocks=1 00:20:12.077 00:20:12.077 ' 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:12.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:12.077 --rc genhtml_branch_coverage=1 00:20:12.077 --rc genhtml_function_coverage=1 00:20:12.077 --rc genhtml_legend=1 00:20:12.077 --rc geninfo_all_blocks=1 00:20:12.077 --rc geninfo_unexecuted_blocks=1 00:20:12.077 00:20:12.077 ' 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:12.077 02:24:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.077 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:12.336 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:12.336 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:12.337 Cannot find device "nvmf_init_br" 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:20:12.337 02:24:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:12.337 Cannot find device "nvmf_init_br2" 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:12.337 Cannot find device "nvmf_tgt_br" 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.337 Cannot find device "nvmf_tgt_br2" 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:12.337 Cannot find device "nvmf_init_br" 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:12.337 Cannot find device "nvmf_init_br2" 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:12.337 Cannot find device "nvmf_tgt_br" 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:12.337 Cannot find device "nvmf_tgt_br2" 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:12.337 Cannot find device "nvmf_br" 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:20:12.337 Cannot find device "nvmf_init_if" 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:12.337 Cannot find device "nvmf_init_if2" 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:12.337 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:12.595 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:12.595 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:12.596 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:12.596 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:20:12.596 00:20:12.596 --- 10.0.0.3 ping statistics --- 00:20:12.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.596 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:12.596 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:12.596 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:20:12.596 00:20:12.596 --- 10.0.0.4 ping statistics --- 00:20:12.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.596 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:12.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:12.596 00:20:12.596 --- 10.0.0.1 ping statistics --- 00:20:12.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.596 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:12.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:20:12.596 00:20:12.596 --- 10.0.0.2 ping statistics --- 00:20:12.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.596 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=91720 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 91720 00:20:12.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 91720 ']' 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
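For reference, the namespace and veth/bridge topology that nvmf/common.sh builds in the trace above can be reproduced standalone with a sketch like the one below. Interface names, addresses, and the TCP port are taken from the log; the real helper also sets up the second *_if2/*_br2 pair and handles cleanup and error checking, all omitted here.

  # Simplified re-creation of the initiator/target test network (one veth pair per side)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                                  # bridge joins the two host-side peers
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP to the target port
  ping -c 1 10.0.0.3                                               # initiator -> target reachability check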
00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:12.596 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:12.855 [2024-11-08 02:24:14.483286] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:12.855 [2024-11-08 02:24:14.483378] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.855 [2024-11-08 02:24:14.625143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:12.855 [2024-11-08 02:24:14.668960] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.855 [2024-11-08 02:24:14.669022] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.855 [2024-11-08 02:24:14.669037] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.855 [2024-11-08 02:24:14.669047] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.855 [2024-11-08 02:24:14.669056] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.855 [2024-11-08 02:24:14.669225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.855 [2024-11-08 02:24:14.669293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.855 [2024-11-08 02:24:14.703963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:13.113 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:13.113 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:20:13.113 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:13.113 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:13.113 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:13.113 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.113 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=91720 00:20:13.113 02:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:13.371 [2024-11-08 02:24:15.090702] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.371 02:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:13.629 Malloc0 00:20:13.629 02:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:13.886 02:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.164 02:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:14.421 [2024-11-08 02:24:16.197197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:14.421 02:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:14.680 [2024-11-08 02:24:16.481278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:14.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.680 02:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91768 00:20:14.680 02:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:14.680 02:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:14.680 02:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91768 /var/tmp/bdevperf.sock 00:20:14.680 02:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 91768 ']' 00:20:14.680 02:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.680 02:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.680 02:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
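Condensed, the RPC sequence traced above configures the target as follows (same arguments as in the log; paths assume this job's repository layout). The bdevperf-side bdev_nvme_attach_controller calls, one per listener with -x multipath on the second, follow in the trace below.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # TCP transport, one malloc-backed namespace, and two listeners so the host sees two paths
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # Host side: bdevperf started in RPC-wait mode (-z) on its own socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 90 &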
00:20:14.680 02:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.680 02:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:15.615 02:24:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:15.615 02:24:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:20:15.615 02:24:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:15.872 02:24:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:16.439 Nvme0n1 00:20:16.439 02:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:16.696 Nvme0n1 00:20:16.696 02:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:16.696 02:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:18.597 02:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:18.597 02:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:18.856 02:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:19.116 02:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:20.208 02:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:20.208 02:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:20.208 02:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.208 02:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:20.467 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.467 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:20.467 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:20.467 02:24:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.725 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:20.725 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:20.725 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:20.725 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.984 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.984 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:20.984 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.984 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:21.243 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.243 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:21.243 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.243 02:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:21.500 02:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.500 02:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:21.500 02:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.500 02:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:21.759 02:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.759 02:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:21.759 02:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:22.018 02:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
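The check_status/port_status pattern repeated through the rest of this trace reduces to querying bdevperf's io_paths and filtering one attribute per listener port with jq. A simplified re-creation of the helper (not the verbatim multipath_status.sh source) looks like this:

  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # port_status <trsvcid> <field> <expected>: succeed if the path attribute matches the expected value
  port_status() {
      local port=$1 field=$2 expected=$3
      local actual
      actual=$($BPERF_RPC bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
      [[ "$actual" == "$expected" ]]
  }
  # The calls in the log check current/connected/accessible for ports 4420 and 4421, e.g.:
  port_status 4420 current true
  port_status 4421 current false
  port_status 4420 connected true
  port_status 4421 accessible true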
00:20:22.277 02:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:23.213 02:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:23.213 02:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:23.213 02:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.213 02:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:23.473 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:23.473 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:23.473 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.473 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:23.732 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.732 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:23.732 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.732 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:23.991 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.991 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:23.991 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.991 02:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:24.249 02:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.249 02:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:24.249 02:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.249 02:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:24.508 02:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.508 02:24:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:24.508 02:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:24.508 02:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.766 02:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.766 02:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:24.766 02:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:25.025 02:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:25.425 02:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:26.360 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:26.360 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:26.360 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.360 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:26.619 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:26.619 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:26.619 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.619 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:26.878 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:26.878 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:26.878 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.878 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:27.136 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:27.136 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:27.136 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.136 02:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:27.395 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:27.395 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:27.395 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:27.395 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.653 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:27.653 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:27.653 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.654 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:27.913 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:27.913 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:27.913 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:28.173 02:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:28.431 02:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:29.367 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:29.367 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:29.367 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.367 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:29.626 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.626 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:20:29.626 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:29.626 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.885 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:29.885 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:29.885 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.885 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:30.145 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:30.145 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:30.145 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:30.145 02:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:30.403 02:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:30.403 02:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:30.403 02:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:30.403 02:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:30.663 02:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:30.663 02:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:30.663 02:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:30.663 02:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:30.922 02:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:30.922 02:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:30.922 02:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:31.181 02:24:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:31.440 02:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:32.377 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:32.377 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:32.377 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.377 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:32.636 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:32.636 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:32.636 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.636 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:32.894 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:32.894 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:32.894 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.894 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:33.153 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:33.153 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:33.153 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:33.153 02:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.411 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:33.411 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:33.411 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:33.411 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.670 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:33.670 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:33.670 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:33.670 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.929 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:33.929 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:33.929 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:34.188 02:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:34.447 02:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:35.383 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:35.383 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:35.383 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:35.383 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:35.642 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:35.642 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:35.642 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:35.642 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:35.901 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:35.901 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:35.901 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:35.901 02:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:20:36.160 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.160 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:36.160 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.160 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:36.418 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.418 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:36.419 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.419 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:36.678 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:36.678 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:36.678 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:36.678 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.937 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.937 02:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:37.195 02:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:37.195 02:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:37.453 02:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:37.712 02:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:39.087 02:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:39.087 02:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:39.087 02:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:20:39.087 02:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.087 02:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.087 02:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:39.087 02:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.087 02:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:39.346 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.346 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:39.346 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:39.346 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.606 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.606 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:39.606 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:39.606 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.865 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.865 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:39.865 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:39.865 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:40.124 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:40.124 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:40.124 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:40.124 02:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:40.383 02:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:40.383 02:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:40.383 02:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:40.646 02:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:40.904 02:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:41.840 02:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:41.840 02:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:41.840 02:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:41.840 02:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:42.099 02:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:42.099 02:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:42.099 02:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:42.099 02:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:42.358 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:42.358 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:42.358 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:42.358 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:42.617 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:42.617 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:42.617 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:42.617 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:42.876 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
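Earlier multipath checks in this trace reported only one current path at a time; the bdev_nvme_set_multipath_policy call at host/multipath_status.sh@116 switches Nvme0n1 to active_active, which is why the @121 check above expected current=true on both 4420 and 4421 once both listeners were ANA optimized. Condensed, the calls involved are:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Allow I/O on all optimized paths simultaneously
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  # Advertise both listeners as ANA optimized
  $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
  $RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
  # bdev_nvme_get_io_paths should then report current=true for both ports
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'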
00:20:42.876 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:42.876 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:42.876 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:43.160 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:43.160 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:43.160 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:43.160 02:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:43.435 02:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:43.435 02:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:43.435 02:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:43.699 02:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:43.958 02:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:44.895 02:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:44.895 02:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:44.895 02:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:44.895 02:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:45.153 02:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:45.153 02:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:45.153 02:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:45.153 02:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:45.412 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:45.412 02:24:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:45.412 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:45.412 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:45.671 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:45.671 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:45.671 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:45.671 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:45.930 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:45.930 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:45.930 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:45.930 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.189 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:46.189 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:46.189 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:46.189 02:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.448 02:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:46.448 02:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:46.448 02:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:46.707 02:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:46.966 02:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:47.900 02:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:47.900 02:24:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:47.900 02:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:47.900 02:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:48.159 02:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:48.159 02:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:48.159 02:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.159 02:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:48.418 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:48.418 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:48.418 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.418 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:48.676 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:48.676 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:48.676 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:48.676 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.935 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:48.935 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:48.935 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.935 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:49.194 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:49.194 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:49.194 02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:49.194 
02:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91768 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 91768 ']' 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 91768 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91768 00:20:49.453 killing process with pid 91768 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91768' 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 91768 00:20:49.453 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 91768 00:20:49.453 { 00:20:49.453 "results": [ 00:20:49.453 { 00:20:49.453 "job": "Nvme0n1", 00:20:49.453 "core_mask": "0x4", 00:20:49.453 "workload": "verify", 00:20:49.453 "status": "terminated", 00:20:49.453 "verify_range": { 00:20:49.453 "start": 0, 00:20:49.453 "length": 16384 00:20:49.453 }, 00:20:49.453 "queue_depth": 128, 00:20:49.453 "io_size": 4096, 00:20:49.453 "runtime": 32.732392, 00:20:49.453 "iops": 9536.54716098964, 00:20:49.453 "mibps": 37.25213734761578, 00:20:49.453 "io_failed": 0, 00:20:49.453 "io_timeout": 0, 00:20:49.453 "avg_latency_us": 13395.00082110986, 00:20:49.453 "min_latency_us": 103.33090909090909, 00:20:49.453 "max_latency_us": 4026531.84 00:20:49.453 } 00:20:49.453 ], 00:20:49.453 "core_count": 1 00:20:49.453 } 00:20:49.715 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91768 00:20:49.715 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:49.715 [2024-11-08 02:24:16.557122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
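The xtrace above shows how the multipath checks are driven: port_status queries bdev_nvme_get_io_paths over the bdevperf RPC socket and filters the reply with jq, while set_ANA_state points nvmf_subsystem_listener_set_ana_state at the 4420 and 4421 listeners of nqn.2016-06.io.spdk:cnode1 before check_status re-reads the three per-path flags (current, connected, accessible). A minimal bash sketch of those helpers, reconstructed from the traced commands (only the RPC calls, socket path, and jq filter are taken from the trace; the surrounding shell scaffolding is an assumption):

    #!/usr/bin/env bash
    # Sketch reconstructed from the traced multipath_status.sh commands above;
    # assumes rpc.py and a bdevperf instance listening on /var/tmp/bdevperf.sock.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # port_status PORT FIELD EXPECTED, e.g. port_status 4420 accessible true
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # set_ANA_state STATE_FOR_4420 STATE_FOR_4421, e.g. set_ANA_state non_optimized inaccessible
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

After the 4421 listener is set to inaccessible, the [[ false == \f\a\l\s\e ]] checks above confirm that the path on port 4421 is reported as neither current nor accessible while port 4420 keeps serving I/O; the bdevperf process is then killed and its JSON result and captured log (try.txt) are dumped below.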
00:20:49.715 [2024-11-08 02:24:16.557242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91768 ] 00:20:49.715 [2024-11-08 02:24:16.698832] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.715 [2024-11-08 02:24:16.731418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.715 [2024-11-08 02:24:16.758655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:49.715 [2024-11-08 02:24:18.378766] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:20:49.715 Running I/O for 90 seconds... 00:20:49.715 7956.00 IOPS, 31.08 MiB/s [2024-11-08T02:24:51.599Z] 8010.00 IOPS, 31.29 MiB/s [2024-11-08T02:24:51.599Z] 7985.67 IOPS, 31.19 MiB/s [2024-11-08T02:24:51.599Z] 7973.00 IOPS, 31.14 MiB/s [2024-11-08T02:24:51.599Z] 7940.20 IOPS, 31.02 MiB/s [2024-11-08T02:24:51.599Z] 8320.83 IOPS, 32.50 MiB/s [2024-11-08T02:24:51.599Z] 8643.86 IOPS, 33.77 MiB/s [2024-11-08T02:24:51.599Z] 8864.62 IOPS, 34.63 MiB/s [2024-11-08T02:24:51.599Z] 9088.33 IOPS, 35.50 MiB/s [2024-11-08T02:24:51.599Z] 9255.60 IOPS, 36.15 MiB/s [2024-11-08T02:24:51.599Z] 9379.18 IOPS, 36.64 MiB/s [2024-11-08T02:24:51.599Z] 9487.67 IOPS, 37.06 MiB/s [2024-11-08T02:24:51.599Z] 9591.08 IOPS, 37.47 MiB/s [2024-11-08T02:24:51.599Z] 9664.21 IOPS, 37.75 MiB/s [2024-11-08T02:24:51.599Z] [2024-11-08 02:24:32.848166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.715 [2024-11-08 02:24:32.848224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:49.715 [2024-11-08 02:24:32.848290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.715 [2024-11-08 02:24:32.848308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:49.715 [2024-11-08 02:24:32.848328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.715 [2024-11-08 02:24:32.848342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848423] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 
02:24:32.848756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.848770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.716 [2024-11-08 02:24:32.848802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.716 [2024-11-08 02:24:32.848834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.716 [2024-11-08 02:24:32.848865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.716 [2024-11-08 02:24:32.848896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.716 [2024-11-08 02:24:32.848935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.716 [2024-11-08 02:24:32.848969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.848987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.716 [2024-11-08 02:24:32.849000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.716 [2024-11-08 02:24:32.849033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 
cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.716 [2024-11-08 02:24:32.849667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.716 [2024-11-08 02:24:32.849698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:49.716 [2024-11-08 02:24:32.849717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.849730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.849748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.849761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.849780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.849793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.849811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.849824] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.849849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.849863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.849881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.849894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.849913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.849927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.849950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.849965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.849983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.849997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 
nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.717 [2024-11-08 02:24:32.850590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.850621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.850663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.850698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.850729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.850761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.850793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.850824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.850856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.850887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.850946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.850967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.850981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:49.717 [2024-11-08 02:24:32.851001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.717 [2024-11-08 02:24:32.851015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.851050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.851090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.851137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.851173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 
dnr:0 00:20:49.718 [2024-11-08 02:24:32.851192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.851707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.851739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.851771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.851802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.851834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.851866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.851897] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.851917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.851930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.852565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.718 [2024-11-08 02:24:32.852591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.852632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.852649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.852674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.852688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.852712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.852726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.852751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.852765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.852789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.852803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.852828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.852845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.852870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.852884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.852924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:49.718 [2024-11-08 02:24:32.852943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.852969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.852982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.853007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.853020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.853045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.853058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.853083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.853097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:49.718 [2024-11-08 02:24:32.853136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.718 [2024-11-08 02:24:32.853160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:32.853186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:32.853201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:32.853226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:32.853239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:32.853267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:32.853282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:32.853307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:32.853320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:32.853346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 
lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:32.853359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:32.853384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:32.853397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:32.853422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:32.853435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:32.853460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:32.853474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:49.719 9233.33 IOPS, 36.07 MiB/s [2024-11-08T02:24:51.603Z] 8656.25 IOPS, 33.81 MiB/s [2024-11-08T02:24:51.603Z] 8147.06 IOPS, 31.82 MiB/s [2024-11-08T02:24:51.603Z] 7694.44 IOPS, 30.06 MiB/s [2024-11-08T02:24:51.603Z] 7679.05 IOPS, 30.00 MiB/s [2024-11-08T02:24:51.603Z] 7817.85 IOPS, 30.54 MiB/s [2024-11-08T02:24:51.603Z] 7994.10 IOPS, 31.23 MiB/s [2024-11-08T02:24:51.603Z] 8280.45 IOPS, 32.35 MiB/s [2024-11-08T02:24:51.603Z] 8527.48 IOPS, 33.31 MiB/s [2024-11-08T02:24:51.603Z] 8744.79 IOPS, 34.16 MiB/s [2024-11-08T02:24:51.603Z] 8826.28 IOPS, 34.48 MiB/s [2024-11-08T02:24:51.603Z] 8891.50 IOPS, 34.73 MiB/s [2024-11-08T02:24:51.603Z] 8948.85 IOPS, 34.96 MiB/s [2024-11-08T02:24:51.603Z] 9116.93 IOPS, 35.61 MiB/s [2024-11-08T02:24:51.603Z] 9274.55 IOPS, 36.23 MiB/s [2024-11-08T02:24:51.603Z] 9432.43 IOPS, 36.85 MiB/s [2024-11-08T02:24:51.603Z] [2024-11-08 02:24:48.661950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.662004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.662066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.662084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.662194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.662211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.662231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.662244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.662280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.662294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.664242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.664284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.664546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.664580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.664611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.664642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.664796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:49.719 [2024-11-08 02:24:48.664827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.664858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.719 [2024-11-08 02:24:48.664891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.719 [2024-11-08 02:24:48.664964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:49.719 [2024-11-08 02:24:48.664982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.720 [2024-11-08 02:24:48.664995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.720 [2024-11-08 02:24:48.665026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.720 [2024-11-08 02:24:48.665057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.720 [2024-11-08 02:24:48.665087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.720 [2024-11-08 02:24:48.665134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.720 [2024-11-08 02:24:48.665166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.720 [2024-11-08 02:24:48.665197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.720 [2024-11-08 02:24:48.665228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.720 [2024-11-08 02:24:48.665260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.720 [2024-11-08 02:24:48.665290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.720 [2024-11-08 02:24:48.665322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.720 [2024-11-08 02:24:48.665363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.720 [2024-11-08 02:24:48.665394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.720 [2024-11-08 02:24:48.665425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.720 [2024-11-08 02:24:48.665457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.720 [2024-11-08 02:24:48.665488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.720 [2024-11-08 02:24:48.665519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:49.720 [2024-11-08 02:24:48.665538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:49.720 [2024-11-08 02:24:48.665550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:49.720 9489.35 IOPS, 37.07 MiB/s [2024-11-08T02:24:51.604Z] 9521.06 IOPS, 37.19 MiB/s [2024-11-08T02:24:51.604Z] Received shutdown signal, test time was about 32.733142 seconds 00:20:49.720 00:20:49.720 Latency(us) 00:20:49.720 [2024-11-08T02:24:51.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.720 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:49.720 Verification LBA range: start 0x0 length 0x4000 00:20:49.720 Nvme0n1 : 32.73 9536.55 37.25 0.00 0.00 13395.00 103.33 4026531.84 00:20:49.720 [2024-11-08T02:24:51.604Z] =================================================================================================================== 00:20:49.720 [2024-11-08T02:24:51.604Z] Total : 9536.55 37.25 0.00 0.00 13395.00 103.33 4026531.84 00:20:49.720 [2024-11-08 02:24:51.278308] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:20:49.720 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.979 rmmod nvme_tcp 00:20:49.979 rmmod nvme_fabrics 00:20:49.979 rmmod nvme_keyring 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.979 
02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 91720 ']' 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 91720 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 91720 ']' 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 91720 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91720 00:20:49.979 killing process with pid 91720 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91720' 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 91720 00:20:49.979 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 91720 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:50.238 02:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:50.238 02:24:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:50.238 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:50.238 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:50.238 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:50.238 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:50.238 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.238 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:50.498 ************************************ 00:20:50.498 END TEST nvmf_host_multipath_status 00:20:50.498 ************************************ 00:20:50.498 00:20:50.498 real 0m38.395s 00:20:50.498 user 2m4.261s 00:20:50.498 sys 0m10.849s 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.498 ************************************ 00:20:50.498 START TEST nvmf_discovery_remove_ifc 00:20:50.498 ************************************ 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:50.498 * Looking for test storage... 
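The multipath-status run above closes with a self-consistent summary: over the ~32.7 s of I/O, Nvme0n1 averaged 9536.55 IOPS at the 4096-byte IO size, and 9536.55 x 4096 B ~= 37.25 MiB/s matches the reported MiB/s column, while the 4026531.84 us (~4 s) maximum latency is consistent with I/O stalling during the stretches where the path reported ASYMMETRIC ACCESS INACCESSIBLE in the notices above. The nvmftestfini teardown traced just above condenses to roughly the following shell sequence (names are the harness's own; the final namespace removal runs with xtrace disabled, so that last line is an assumption rather than a traced command):
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics      # unload host-side NVMe/TCP modules (rmmod nvme_tcp/fabrics/keyring)
  kill 91720 && wait 91720                                    # killprocess $nvmfpid: stop the target app
  iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop the SPDK_NVMF ACCEPT rules
  ip link set nvmf_init_br nomaster; ip link set nvmf_tgt_br nomaster
  ip link set nvmf_init_br down;     ip link set nvmf_tgt_br down
  ip link delete nvmf_br type bridge                          # remove the test bridge
  ip link delete nvmf_init_if                                 # host-side veth ends
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # target-side veth ends
  ip netns delete nvmf_tgt_ns_spdk                            # assumed: _remove_spdk_ns is not traced here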
00:20:50.498 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:20:50.498 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.757 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:50.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.758 --rc genhtml_branch_coverage=1 00:20:50.758 --rc genhtml_function_coverage=1 00:20:50.758 --rc genhtml_legend=1 00:20:50.758 --rc geninfo_all_blocks=1 00:20:50.758 --rc geninfo_unexecuted_blocks=1 00:20:50.758 00:20:50.758 ' 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:50.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.758 --rc genhtml_branch_coverage=1 00:20:50.758 --rc genhtml_function_coverage=1 00:20:50.758 --rc genhtml_legend=1 00:20:50.758 --rc geninfo_all_blocks=1 00:20:50.758 --rc geninfo_unexecuted_blocks=1 00:20:50.758 00:20:50.758 ' 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:50.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.758 --rc genhtml_branch_coverage=1 00:20:50.758 --rc genhtml_function_coverage=1 00:20:50.758 --rc genhtml_legend=1 00:20:50.758 --rc geninfo_all_blocks=1 00:20:50.758 --rc geninfo_unexecuted_blocks=1 00:20:50.758 00:20:50.758 ' 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:50.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.758 --rc genhtml_branch_coverage=1 00:20:50.758 --rc genhtml_function_coverage=1 00:20:50.758 --rc genhtml_legend=1 00:20:50.758 --rc geninfo_all_blocks=1 00:20:50.758 --rc geninfo_unexecuted_blocks=1 00:20:50.758 00:20:50.758 ' 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:50.758 02:24:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:50.758 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:50.758 02:24:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:50.758 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:50.759 Cannot find device "nvmf_init_br" 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:50.759 Cannot find device "nvmf_init_br2" 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:50.759 Cannot find device "nvmf_tgt_br" 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.759 Cannot find device "nvmf_tgt_br2" 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:50.759 Cannot find device "nvmf_init_br" 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:50.759 Cannot find device "nvmf_init_br2" 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:50.759 Cannot find device "nvmf_tgt_br" 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:50.759 Cannot find device "nvmf_tgt_br2" 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:50.759 Cannot find device "nvmf_br" 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:50.759 Cannot find device "nvmf_init_if" 00:20:50.759 02:24:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:50.759 Cannot find device "nvmf_init_if2" 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:50.759 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:51.017 02:24:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:51.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:51.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:20:51.017 00:20:51.017 --- 10.0.0.3 ping statistics --- 00:20:51.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.017 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:51.017 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:51.017 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:20:51.017 00:20:51.017 --- 10.0.0.4 ping statistics --- 00:20:51.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.017 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:51.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:51.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:51.017 00:20:51.017 --- 10.0.0.1 ping statistics --- 00:20:51.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.017 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:51.017 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:51.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:20:51.018 00:20:51.018 --- 10.0.0.2 ping statistics --- 00:20:51.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.018 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=92598 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 92598 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 92598 ']' 00:20:51.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
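Before the discovery test proper, nvmf_veth_init rebuilds the virtual topology the previous test tore down: the nvmf_tgt_ns_spdk namespace holds the target-side veth ends (10.0.0.3 and 10.0.0.4), the host side keeps 10.0.0.1 and 10.0.0.2, everything hangs off the nvmf_br bridge, iptables ACCEPT rules open TCP port 4420, and the four pings above confirm reachability in both directions. With that in place the target app (nvmfpid 92598) is launched inside the namespace, which is where the trace has just arrived. Condensed to one veth pair per side (the *_if2/*_br2 mirror pair is set up the same way in the records above):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br           # host-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host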
00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:51.018 02:24:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:51.279 [2024-11-08 02:24:52.905704] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:51.279 [2024-11-08 02:24:52.906606] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.279 [2024-11-08 02:24:53.045756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.279 [2024-11-08 02:24:53.076408] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.279 [2024-11-08 02:24:53.076458] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.279 [2024-11-08 02:24:53.076484] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.279 [2024-11-08 02:24:53.076505] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.279 [2024-11-08 02:24:53.076510] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.279 [2024-11-08 02:24:53.076533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.279 [2024-11-08 02:24:53.101969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:52.217 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:52.217 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:52.217 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:52.217 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:52.217 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:52.217 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.217 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:52.218 [2024-11-08 02:24:53.900547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.218 [2024-11-08 02:24:53.908623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:52.218 null0 00:20:52.218 [2024-11-08 02:24:53.940545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:52.218 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
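At this point the target (pid 92598, inside nvmf_tgt_ns_spdk) has a TCP transport with a discovery listener on 10.0.0.3:8009 and an I/O subsystem (nqn.2016-06.io.spdk:cnode0, per the discovery messages further down) listening on 10.0.0.3:4420 with null0 as its namespace, and the harness is starting a second SPDK app on /tmp/host.sock to act as the host. The host-side flow traced below condenses to roughly the following (rpc_cmd is the suite's RPC helper, equivalent to calling scripts/rpc.py -s /tmp/host.sock; options copied from the trace):
  # host-side initiator app, RPC socket /tmp/host.sock (hostpid 92630 in this run)
  nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
  rpc_cmd -s /tmp/host.sock framework_start_init
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
          -f ipv4 -q nqn.2021-12.io.spdk:test \
          --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs    # get_bdev_list; expects nvme0n1
Once nvme0n1 appears, the test deletes 10.0.0.3 from nvmf_tgt_if and brings the interface down, then polls get_bdev_list once per second (wait_for_bdev '') until the bdev list drains, which is the loop the rest of the trace shows.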
00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=92630 00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 92630 /tmp/host.sock 00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 92630 ']' 00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:52.218 02:24:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:52.218 [2024-11-08 02:24:54.009587] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:52.218 [2024-11-08 02:24:54.010001] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92630 ] 00:20:52.477 [2024-11-08 02:24:54.143125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.477 [2024-11-08 02:24:54.181613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:52.477 [2024-11-08 02:24:54.329880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:52.477 02:24:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.477 02:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:53.854 [2024-11-08 02:24:55.363229] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:53.854 [2024-11-08 02:24:55.363459] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:53.854 [2024-11-08 02:24:55.363492] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:53.854 [2024-11-08 02:24:55.369272] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:53.854 [2024-11-08 02:24:55.425533] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:53.854 [2024-11-08 02:24:55.425742] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:53.854 [2024-11-08 02:24:55.425810] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:53.854 [2024-11-08 02:24:55.425920] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:53.854 [2024-11-08 02:24:55.425991] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:53.854 [2024-11-08 02:24:55.432129] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2531290 was disconnected and freed. delete nvme_qpair. 
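The discovery connection above is created with bdev_nvme_start_discovery against the discovery service on 10.0.0.3:8009; the returned log page points the host at the NVM subsystem on 10.0.0.3:4420, which attaches as bdev nvme0n1. The short --ctrlr-loss-timeout-sec/--reconnect-delay-sec/--fast-io-fail-timeout-sec values are what make that bdev drop out quickly once the target interface is removed later in the test. The same RPC issued directly with scripts/rpc.py (rpc_cmd in the trace is effectively a test wrapper around it):

    # Direct form of the discovery RPC shown in the trace above:
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach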
00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:53.854 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:53.855 02:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:54.791 02:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:54.791 02:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:54.791 02:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:54.791 02:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:54.791 02:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.791 02:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:54.791 02:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:54.791 02:24:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.791 02:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:54.791 02:24:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:56.169 02:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:56.169 02:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:56.169 02:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:56.169 02:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.169 02:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:56.169 02:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:56.169 02:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:56.169 02:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.169 02:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:56.169 02:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:57.106 02:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:57.106 02:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:57.106 02:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:57.106 02:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.106 02:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:57.106 02:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:57.106 02:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:57.106 02:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.106 02:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:57.106 02:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:58.042 02:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:58.042 02:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:58.042 02:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.042 02:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:58.042 02:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:58.042 02:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:58.042 02:24:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:58.042 02:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.042 02:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:58.042 02:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:58.978 02:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:58.978 02:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:58.978 02:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:58.978 02:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.978 02:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:58.978 02:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:58.978 02:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:58.978 02:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.978 02:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:58.978 02:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:58.978 [2024-11-08 02:25:00.853995] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:58.978 [2024-11-08 02:25:00.854069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.978 [2024-11-08 02:25:00.854084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.978 [2024-11-08 02:25:00.854095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.978 [2024-11-08 02:25:00.854103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.978 [2024-11-08 02:25:00.854127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.978 [2024-11-08 02:25:00.854165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.979 [2024-11-08 02:25:00.854176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.979 [2024-11-08 02:25:00.854185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.979 [2024-11-08 02:25:00.854195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:58.979 [2024-11-08 02:25:00.854204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:58.979 [2024-11-08 02:25:00.854213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250cd00 is same with the state(6) to be set 00:20:59.238 [2024-11-08 02:25:00.863993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250cd00 (9): Bad file descriptor 00:20:59.238 [2024-11-08 02:25:00.874010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:00.174 02:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:00.174 02:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:00.174 02:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:00.174 02:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.174 02:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:00.174 02:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:00.174 02:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:00.174 [2024-11-08 02:25:01.912207] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:21:00.174 [2024-11-08 02:25:01.912430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x250cd00 with addr=10.0.0.3, port=4420 00:21:00.174 [2024-11-08 02:25:01.912702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x250cd00 is same with the state(6) to be set 00:21:00.174 [2024-11-08 02:25:01.912986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x250cd00 (9): Bad file descriptor 00:21:00.174 [2024-11-08 02:25:01.913733] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:00.174 [2024-11-08 02:25:01.913947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:00.174 [2024-11-08 02:25:01.913968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:00.174 [2024-11-08 02:25:01.913982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:00.174 [2024-11-08 02:25:01.914007] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:00.174 [2024-11-08 02:25:01.914024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:00.174 02:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.174 02:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:00.174 02:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:01.111 [2024-11-08 02:25:02.914060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:21:01.111 [2024-11-08 02:25:02.914094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:01.111 [2024-11-08 02:25:02.914163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:01.111 [2024-11-08 02:25:02.914173] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:21:01.111 [2024-11-08 02:25:02.914192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.111 [2024-11-08 02:25:02.914218] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:21:01.111 [2024-11-08 02:25:02.914265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.111 [2024-11-08 02:25:02.914280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.111 [2024-11-08 02:25:02.914294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.111 [2024-11-08 02:25:02.914303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.111 [2024-11-08 02:25:02.914312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.111 [2024-11-08 02:25:02.914320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.111 [2024-11-08 02:25:02.914329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.111 [2024-11-08 02:25:02.914337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.111 [2024-11-08 02:25:02.914347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.111 [2024-11-08 02:25:02.914355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.111 [2024-11-08 02:25:02.914363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
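While the target interface is down, the host keeps retrying the controller and the test polls the host app's bdev list once per second until nvme0n1 disappears. A sketch of that poll: the jq/sort/xargs pipeline is taken verbatim from the xtrace output above, while the surrounding loop is an assumption, since the bodies of get_bdev_list and wait_for_bdev are not shown in this log:

    # Poll the host app's bdev list until it matches the expected value
    # ('' once nvme0n1 has been torn down). Loop structure is assumed.
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }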
00:21:01.111 [2024-11-08 02:25:02.914908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb2a0 (9): Bad file descriptor 00:21:01.111 [2024-11-08 02:25:02.915921] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:01.111 [2024-11-08 02:25:02.915940] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:01.111 02:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:01.111 02:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:01.111 02:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:01.111 02:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:01.111 02:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.111 02:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:01.111 02:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:01.111 02:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.111 02:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:01.111 02:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:01.370 02:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:02.307 02:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:02.307 02:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:02.307 02:25:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.307 02:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:02.307 02:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:02.307 02:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:02.307 02:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:02.307 02:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.307 02:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:02.307 02:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:03.243 [2024-11-08 02:25:04.927172] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:03.243 [2024-11-08 02:25:04.927195] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:03.243 [2024-11-08 02:25:04.927213] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:03.243 [2024-11-08 02:25:04.933219] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:21:03.243 [2024-11-08 02:25:04.989387] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:03.243 [2024-11-08 02:25:04.989655] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:03.243 [2024-11-08 02:25:04.989720] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:03.243 [2024-11-08 02:25:04.989897] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:21:03.243 [2024-11-08 02:25:04.989962] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:03.243 [2024-11-08 02:25:04.995962] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24e8920 was disconnected and freed. delete nvme_qpair. 
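Both the failure and the recovery are injected from outside SPDK, purely with iproute2 commands run in the target's network namespace, which is why the host first loses nvme0n1 and then re-attaches the subsystem as nvme1n1 through a fresh discovery. The exact commands, as seen at steps @75/@76 and @82/@83 of the trace:

    # Fault injection: remove the listen address and take the link down
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # ... host-side nvme0n1 disappears and reconnect attempts fail ...
    # Recovery: restore the address and bring the link back up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up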
00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 92630 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 92630 ']' 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 92630 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92630 00:21:03.503 killing process with pid 92630 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92630' 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 92630 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 92630 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:03.503 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.762 rmmod nvme_tcp 00:21:03.762 rmmod nvme_fabrics 00:21:03.762 rmmod nvme_keyring 00:21:03.762 02:25:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 92598 ']' 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 92598 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 92598 ']' 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 92598 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92598 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:03.762 killing process with pid 92598 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92598' 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 92598 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 92598 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:03.762 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:21:04.021 00:21:04.021 real 0m13.655s 00:21:04.021 user 0m22.992s 00:21:04.021 sys 0m2.367s 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:04.021 02:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:04.021 ************************************ 00:21:04.021 END TEST nvmf_discovery_remove_ifc 00:21:04.021 ************************************ 00:21:04.282 02:25:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:04.282 02:25:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:04.282 02:25:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:04.282 02:25:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.282 ************************************ 00:21:04.282 START TEST nvmf_identify_kernel_target 00:21:04.282 ************************************ 00:21:04.282 02:25:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:04.282 * Looking for test storage... 
00:21:04.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:04.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.282 --rc genhtml_branch_coverage=1 00:21:04.282 --rc genhtml_function_coverage=1 00:21:04.282 --rc genhtml_legend=1 00:21:04.282 --rc geninfo_all_blocks=1 00:21:04.282 --rc geninfo_unexecuted_blocks=1 00:21:04.282 00:21:04.282 ' 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:04.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.282 --rc genhtml_branch_coverage=1 00:21:04.282 --rc genhtml_function_coverage=1 00:21:04.282 --rc genhtml_legend=1 00:21:04.282 --rc geninfo_all_blocks=1 00:21:04.282 --rc geninfo_unexecuted_blocks=1 00:21:04.282 00:21:04.282 ' 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:04.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.282 --rc genhtml_branch_coverage=1 00:21:04.282 --rc genhtml_function_coverage=1 00:21:04.282 --rc genhtml_legend=1 00:21:04.282 --rc geninfo_all_blocks=1 00:21:04.282 --rc geninfo_unexecuted_blocks=1 00:21:04.282 00:21:04.282 ' 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:04.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.282 --rc genhtml_branch_coverage=1 00:21:04.282 --rc genhtml_function_coverage=1 00:21:04.282 --rc genhtml_legend=1 00:21:04.282 --rc geninfo_all_blocks=1 00:21:04.282 --rc geninfo_unexecuted_blocks=1 00:21:04.282 00:21:04.282 ' 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
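Sourcing test/nvmf/common.sh sets up the test-wide defaults traced below: the TCP ports, the virtual-interface names, and a freshly generated host NQN/ID pair. The NQN comes from nvme-cli's generator, for example:

    # Generate a host NQN the way the sourced common.sh does (nvme-cli):
    nvme gen-hostnqn
    # -> nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 (the value from this run)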
00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.282 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.283 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:04.283 02:25:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:04.283 02:25:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:04.283 Cannot find device "nvmf_init_br" 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:21:04.283 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:04.543 Cannot find device "nvmf_init_br2" 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:04.543 Cannot find device "nvmf_tgt_br" 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:04.543 Cannot find device "nvmf_tgt_br2" 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:04.543 Cannot find device "nvmf_init_br" 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:04.543 Cannot find device "nvmf_init_br2" 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:04.543 Cannot find device "nvmf_tgt_br" 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:04.543 Cannot find device "nvmf_tgt_br2" 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:04.543 Cannot find device "nvmf_br" 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:04.543 Cannot find device "nvmf_init_if" 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:04.543 Cannot find device "nvmf_init_if2" 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:04.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.543 02:25:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:04.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:04.543 02:25:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:04.543 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:04.802 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:04.802 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:04.802 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:04.802 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:04.802 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:04.802 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:04.802 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:04.802 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:04.802 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:04.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:04.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:21:04.802 00:21:04.802 --- 10.0.0.3 ping statistics --- 00:21:04.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.802 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:04.802 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:04.802 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:04.802 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:21:04.802 00:21:04.802 --- 10.0.0.4 ping statistics --- 00:21:04.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.802 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:04.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:21:04.803 00:21:04.803 --- 10.0.0.1 ping statistics --- 00:21:04.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.803 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:04.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
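The firewall rules above are added through the ipts wrapper, which tags every rule with an -m comment marker so the test's rules can be stripped in one pass at teardown. Written out directly (4420 is the NVMe/TCP port the target will listen on; the expanded iptables commands are exactly what the trace shows):

  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The matching cleanup later in the trace is effectively iptables-save | grep -v SPDK_NVMF | iptables-restore. The four pings that follow (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the namespace) are the sanity check that the topology forwards in both directions.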
00:21:04.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:21:04.803 00:21:04.803 --- 10.0.0.2 ping statistics --- 00:21:04.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.803 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:04.803 02:25:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:05.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:05.062 Waiting for block devices as requested 00:21:05.062 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:05.321 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:05.321 No valid GPT data, bailing 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:05.321 02:25:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:05.321 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:05.580 No valid GPT data, bailing 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:05.580 No valid GPT data, bailing 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:05.580 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
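The loop above is how the test picks a block device it may hand to the kernel target: it walks /sys/block/nvme*, skips zoned namespaces, and treats "No valid GPT data, bailing" from spdk-gpt.py plus an empty PTTYPE from blkid as "this namespace is unused". A rough stand-alone equivalent, assuming only blkid (the real block_in_use helper also runs scripts/spdk-gpt.py):

  nvme=""
  for block in /sys/block/nvme*; do
      dev=${block##*/}
      # skip zoned namespaces, which cannot be exported this way
      if [[ -e $block/queue/zoned && $(cat "$block/queue/zoned") != none ]]; then
          continue
      fi
      # an empty PTTYPE means no partition table, i.e. nothing is using the namespace
      if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
          nvme=/dev/$dev   # last match wins, as in the trace (/dev/nvme1n1 here)
      fi
  done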
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:05.581 No valid GPT data, bailing 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:05.581 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -a 10.0.0.1 -t tcp -s 4420 00:21:05.840 00:21:05.840 Discovery Log Number of Records 2, Generation counter 2 00:21:05.840 =====Discovery Log Entry 0====== 00:21:05.840 trtype: tcp 00:21:05.840 adrfam: ipv4 00:21:05.840 subtype: current discovery subsystem 00:21:05.840 treq: not specified, sq flow control disable supported 00:21:05.840 portid: 1 00:21:05.840 trsvcid: 4420 00:21:05.840 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:05.840 traddr: 10.0.0.1 00:21:05.840 eflags: none 00:21:05.840 sectype: none 00:21:05.840 =====Discovery Log Entry 1====== 00:21:05.840 trtype: tcp 00:21:05.840 adrfam: ipv4 00:21:05.840 subtype: nvme subsystem 00:21:05.840 treq: not 
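With /dev/nvme1n1 selected, configure_kernel_target builds a kernel NVMe-oF target purely through configfs: one subsystem (nqn.2016-06.io.spdk:testnqn), one namespace backed by the chosen device, and one TCP port on 10.0.0.1:4420, linked together at the end. The xtrace hides the redirection targets of the echo commands, so the standard nvmet attribute file names are assumed in this sketch:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe nvmet
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed file; shows up as Model Number in the identify dump
  echo 1            > "$subsys/attr_allow_any_host"              # assumed file
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover run right after this is the first proof it worked: the discovery log reports two records on 10.0.0.1:4420, the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.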
specified, sq flow control disable supported 00:21:05.840 portid: 1 00:21:05.840 trsvcid: 4420 00:21:05.840 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:05.840 traddr: 10.0.0.1 00:21:05.840 eflags: none 00:21:05.840 sectype: none 00:21:05.840 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:05.840 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:05.840 ===================================================== 00:21:05.840 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:05.840 ===================================================== 00:21:05.840 Controller Capabilities/Features 00:21:05.840 ================================ 00:21:05.840 Vendor ID: 0000 00:21:05.840 Subsystem Vendor ID: 0000 00:21:05.840 Serial Number: 9d2676a2630437525e1e 00:21:05.840 Model Number: Linux 00:21:05.840 Firmware Version: 6.8.9-20 00:21:05.840 Recommended Arb Burst: 0 00:21:05.840 IEEE OUI Identifier: 00 00 00 00:21:05.840 Multi-path I/O 00:21:05.840 May have multiple subsystem ports: No 00:21:05.840 May have multiple controllers: No 00:21:05.840 Associated with SR-IOV VF: No 00:21:05.840 Max Data Transfer Size: Unlimited 00:21:05.840 Max Number of Namespaces: 0 00:21:05.840 Max Number of I/O Queues: 1024 00:21:05.840 NVMe Specification Version (VS): 1.3 00:21:05.840 NVMe Specification Version (Identify): 1.3 00:21:05.840 Maximum Queue Entries: 1024 00:21:05.840 Contiguous Queues Required: No 00:21:05.840 Arbitration Mechanisms Supported 00:21:05.840 Weighted Round Robin: Not Supported 00:21:05.840 Vendor Specific: Not Supported 00:21:05.840 Reset Timeout: 7500 ms 00:21:05.840 Doorbell Stride: 4 bytes 00:21:05.840 NVM Subsystem Reset: Not Supported 00:21:05.840 Command Sets Supported 00:21:05.840 NVM Command Set: Supported 00:21:05.840 Boot Partition: Not Supported 00:21:05.840 Memory Page Size Minimum: 4096 bytes 00:21:05.840 Memory Page Size Maximum: 4096 bytes 00:21:05.840 Persistent Memory Region: Not Supported 00:21:05.840 Optional Asynchronous Events Supported 00:21:05.840 Namespace Attribute Notices: Not Supported 00:21:05.840 Firmware Activation Notices: Not Supported 00:21:05.840 ANA Change Notices: Not Supported 00:21:05.840 PLE Aggregate Log Change Notices: Not Supported 00:21:05.840 LBA Status Info Alert Notices: Not Supported 00:21:05.840 EGE Aggregate Log Change Notices: Not Supported 00:21:05.840 Normal NVM Subsystem Shutdown event: Not Supported 00:21:05.840 Zone Descriptor Change Notices: Not Supported 00:21:05.840 Discovery Log Change Notices: Supported 00:21:05.840 Controller Attributes 00:21:05.840 128-bit Host Identifier: Not Supported 00:21:05.840 Non-Operational Permissive Mode: Not Supported 00:21:05.840 NVM Sets: Not Supported 00:21:05.840 Read Recovery Levels: Not Supported 00:21:05.840 Endurance Groups: Not Supported 00:21:05.840 Predictable Latency Mode: Not Supported 00:21:05.840 Traffic Based Keep ALive: Not Supported 00:21:05.840 Namespace Granularity: Not Supported 00:21:05.840 SQ Associations: Not Supported 00:21:05.840 UUID List: Not Supported 00:21:05.840 Multi-Domain Subsystem: Not Supported 00:21:05.840 Fixed Capacity Management: Not Supported 00:21:05.840 Variable Capacity Management: Not Supported 00:21:05.840 Delete Endurance Group: Not Supported 00:21:05.840 Delete NVM Set: Not Supported 00:21:05.840 Extended LBA Formats Supported: Not Supported 00:21:05.840 Flexible Data 
Placement Supported: Not Supported 00:21:05.840 00:21:05.840 Controller Memory Buffer Support 00:21:05.840 ================================ 00:21:05.840 Supported: No 00:21:05.840 00:21:05.840 Persistent Memory Region Support 00:21:05.840 ================================ 00:21:05.840 Supported: No 00:21:05.840 00:21:05.840 Admin Command Set Attributes 00:21:05.841 ============================ 00:21:05.841 Security Send/Receive: Not Supported 00:21:05.841 Format NVM: Not Supported 00:21:05.841 Firmware Activate/Download: Not Supported 00:21:05.841 Namespace Management: Not Supported 00:21:05.841 Device Self-Test: Not Supported 00:21:05.841 Directives: Not Supported 00:21:05.841 NVMe-MI: Not Supported 00:21:05.841 Virtualization Management: Not Supported 00:21:05.841 Doorbell Buffer Config: Not Supported 00:21:05.841 Get LBA Status Capability: Not Supported 00:21:05.841 Command & Feature Lockdown Capability: Not Supported 00:21:05.841 Abort Command Limit: 1 00:21:05.841 Async Event Request Limit: 1 00:21:05.841 Number of Firmware Slots: N/A 00:21:05.841 Firmware Slot 1 Read-Only: N/A 00:21:05.841 Firmware Activation Without Reset: N/A 00:21:05.841 Multiple Update Detection Support: N/A 00:21:05.841 Firmware Update Granularity: No Information Provided 00:21:05.841 Per-Namespace SMART Log: No 00:21:05.841 Asymmetric Namespace Access Log Page: Not Supported 00:21:05.841 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:05.841 Command Effects Log Page: Not Supported 00:21:05.841 Get Log Page Extended Data: Supported 00:21:05.841 Telemetry Log Pages: Not Supported 00:21:05.841 Persistent Event Log Pages: Not Supported 00:21:05.841 Supported Log Pages Log Page: May Support 00:21:05.841 Commands Supported & Effects Log Page: Not Supported 00:21:05.841 Feature Identifiers & Effects Log Page:May Support 00:21:05.841 NVMe-MI Commands & Effects Log Page: May Support 00:21:05.841 Data Area 4 for Telemetry Log: Not Supported 00:21:05.841 Error Log Page Entries Supported: 1 00:21:05.841 Keep Alive: Not Supported 00:21:05.841 00:21:05.841 NVM Command Set Attributes 00:21:05.841 ========================== 00:21:05.841 Submission Queue Entry Size 00:21:05.841 Max: 1 00:21:05.841 Min: 1 00:21:05.841 Completion Queue Entry Size 00:21:05.841 Max: 1 00:21:05.841 Min: 1 00:21:05.841 Number of Namespaces: 0 00:21:05.841 Compare Command: Not Supported 00:21:05.841 Write Uncorrectable Command: Not Supported 00:21:05.841 Dataset Management Command: Not Supported 00:21:05.841 Write Zeroes Command: Not Supported 00:21:05.841 Set Features Save Field: Not Supported 00:21:05.841 Reservations: Not Supported 00:21:05.841 Timestamp: Not Supported 00:21:05.841 Copy: Not Supported 00:21:05.841 Volatile Write Cache: Not Present 00:21:05.841 Atomic Write Unit (Normal): 1 00:21:05.841 Atomic Write Unit (PFail): 1 00:21:05.841 Atomic Compare & Write Unit: 1 00:21:05.841 Fused Compare & Write: Not Supported 00:21:05.841 Scatter-Gather List 00:21:05.841 SGL Command Set: Supported 00:21:05.841 SGL Keyed: Not Supported 00:21:05.841 SGL Bit Bucket Descriptor: Not Supported 00:21:05.841 SGL Metadata Pointer: Not Supported 00:21:05.841 Oversized SGL: Not Supported 00:21:05.841 SGL Metadata Address: Not Supported 00:21:05.841 SGL Offset: Supported 00:21:05.841 Transport SGL Data Block: Not Supported 00:21:05.841 Replay Protected Memory Block: Not Supported 00:21:05.841 00:21:05.841 Firmware Slot Information 00:21:05.841 ========================= 00:21:05.841 Active slot: 0 00:21:05.841 00:21:05.841 00:21:05.841 Error Log 
00:21:05.841 ========= 00:21:05.841 00:21:05.841 Active Namespaces 00:21:05.841 ================= 00:21:05.841 Discovery Log Page 00:21:05.841 ================== 00:21:05.841 Generation Counter: 2 00:21:05.841 Number of Records: 2 00:21:05.841 Record Format: 0 00:21:05.841 00:21:05.841 Discovery Log Entry 0 00:21:05.841 ---------------------- 00:21:05.841 Transport Type: 3 (TCP) 00:21:05.841 Address Family: 1 (IPv4) 00:21:05.841 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:05.841 Entry Flags: 00:21:05.841 Duplicate Returned Information: 0 00:21:05.841 Explicit Persistent Connection Support for Discovery: 0 00:21:05.841 Transport Requirements: 00:21:05.841 Secure Channel: Not Specified 00:21:05.841 Port ID: 1 (0x0001) 00:21:05.841 Controller ID: 65535 (0xffff) 00:21:05.841 Admin Max SQ Size: 32 00:21:05.841 Transport Service Identifier: 4420 00:21:05.841 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:05.841 Transport Address: 10.0.0.1 00:21:05.841 Discovery Log Entry 1 00:21:05.841 ---------------------- 00:21:05.841 Transport Type: 3 (TCP) 00:21:05.841 Address Family: 1 (IPv4) 00:21:05.841 Subsystem Type: 2 (NVM Subsystem) 00:21:05.841 Entry Flags: 00:21:05.841 Duplicate Returned Information: 0 00:21:05.841 Explicit Persistent Connection Support for Discovery: 0 00:21:05.841 Transport Requirements: 00:21:05.841 Secure Channel: Not Specified 00:21:05.841 Port ID: 1 (0x0001) 00:21:05.841 Controller ID: 65535 (0xffff) 00:21:05.841 Admin Max SQ Size: 32 00:21:05.841 Transport Service Identifier: 4420 00:21:05.841 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:05.841 Transport Address: 10.0.0.1 00:21:05.841 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:06.101 get_feature(0x01) failed 00:21:06.101 get_feature(0x02) failed 00:21:06.101 get_feature(0x04) failed 00:21:06.101 ===================================================== 00:21:06.101 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:06.101 ===================================================== 00:21:06.101 Controller Capabilities/Features 00:21:06.101 ================================ 00:21:06.101 Vendor ID: 0000 00:21:06.101 Subsystem Vendor ID: 0000 00:21:06.101 Serial Number: f646bddd23306cdfe411 00:21:06.101 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:06.101 Firmware Version: 6.8.9-20 00:21:06.101 Recommended Arb Burst: 6 00:21:06.101 IEEE OUI Identifier: 00 00 00 00:21:06.101 Multi-path I/O 00:21:06.101 May have multiple subsystem ports: Yes 00:21:06.101 May have multiple controllers: Yes 00:21:06.101 Associated with SR-IOV VF: No 00:21:06.101 Max Data Transfer Size: Unlimited 00:21:06.101 Max Number of Namespaces: 1024 00:21:06.101 Max Number of I/O Queues: 128 00:21:06.101 NVMe Specification Version (VS): 1.3 00:21:06.101 NVMe Specification Version (Identify): 1.3 00:21:06.101 Maximum Queue Entries: 1024 00:21:06.101 Contiguous Queues Required: No 00:21:06.101 Arbitration Mechanisms Supported 00:21:06.101 Weighted Round Robin: Not Supported 00:21:06.101 Vendor Specific: Not Supported 00:21:06.101 Reset Timeout: 7500 ms 00:21:06.101 Doorbell Stride: 4 bytes 00:21:06.101 NVM Subsystem Reset: Not Supported 00:21:06.101 Command Sets Supported 00:21:06.101 NVM Command Set: Supported 00:21:06.101 Boot Partition: Not Supported 00:21:06.101 Memory 
Page Size Minimum: 4096 bytes 00:21:06.101 Memory Page Size Maximum: 4096 bytes 00:21:06.101 Persistent Memory Region: Not Supported 00:21:06.101 Optional Asynchronous Events Supported 00:21:06.101 Namespace Attribute Notices: Supported 00:21:06.101 Firmware Activation Notices: Not Supported 00:21:06.101 ANA Change Notices: Supported 00:21:06.101 PLE Aggregate Log Change Notices: Not Supported 00:21:06.101 LBA Status Info Alert Notices: Not Supported 00:21:06.101 EGE Aggregate Log Change Notices: Not Supported 00:21:06.101 Normal NVM Subsystem Shutdown event: Not Supported 00:21:06.101 Zone Descriptor Change Notices: Not Supported 00:21:06.101 Discovery Log Change Notices: Not Supported 00:21:06.101 Controller Attributes 00:21:06.101 128-bit Host Identifier: Supported 00:21:06.101 Non-Operational Permissive Mode: Not Supported 00:21:06.101 NVM Sets: Not Supported 00:21:06.101 Read Recovery Levels: Not Supported 00:21:06.101 Endurance Groups: Not Supported 00:21:06.101 Predictable Latency Mode: Not Supported 00:21:06.101 Traffic Based Keep ALive: Supported 00:21:06.101 Namespace Granularity: Not Supported 00:21:06.101 SQ Associations: Not Supported 00:21:06.101 UUID List: Not Supported 00:21:06.101 Multi-Domain Subsystem: Not Supported 00:21:06.101 Fixed Capacity Management: Not Supported 00:21:06.101 Variable Capacity Management: Not Supported 00:21:06.101 Delete Endurance Group: Not Supported 00:21:06.101 Delete NVM Set: Not Supported 00:21:06.101 Extended LBA Formats Supported: Not Supported 00:21:06.101 Flexible Data Placement Supported: Not Supported 00:21:06.101 00:21:06.101 Controller Memory Buffer Support 00:21:06.101 ================================ 00:21:06.101 Supported: No 00:21:06.101 00:21:06.101 Persistent Memory Region Support 00:21:06.101 ================================ 00:21:06.101 Supported: No 00:21:06.101 00:21:06.101 Admin Command Set Attributes 00:21:06.101 ============================ 00:21:06.101 Security Send/Receive: Not Supported 00:21:06.101 Format NVM: Not Supported 00:21:06.101 Firmware Activate/Download: Not Supported 00:21:06.101 Namespace Management: Not Supported 00:21:06.101 Device Self-Test: Not Supported 00:21:06.101 Directives: Not Supported 00:21:06.101 NVMe-MI: Not Supported 00:21:06.101 Virtualization Management: Not Supported 00:21:06.101 Doorbell Buffer Config: Not Supported 00:21:06.101 Get LBA Status Capability: Not Supported 00:21:06.101 Command & Feature Lockdown Capability: Not Supported 00:21:06.101 Abort Command Limit: 4 00:21:06.101 Async Event Request Limit: 4 00:21:06.101 Number of Firmware Slots: N/A 00:21:06.101 Firmware Slot 1 Read-Only: N/A 00:21:06.101 Firmware Activation Without Reset: N/A 00:21:06.101 Multiple Update Detection Support: N/A 00:21:06.101 Firmware Update Granularity: No Information Provided 00:21:06.101 Per-Namespace SMART Log: Yes 00:21:06.101 Asymmetric Namespace Access Log Page: Supported 00:21:06.101 ANA Transition Time : 10 sec 00:21:06.101 00:21:06.101 Asymmetric Namespace Access Capabilities 00:21:06.101 ANA Optimized State : Supported 00:21:06.101 ANA Non-Optimized State : Supported 00:21:06.101 ANA Inaccessible State : Supported 00:21:06.101 ANA Persistent Loss State : Supported 00:21:06.101 ANA Change State : Supported 00:21:06.101 ANAGRPID is not changed : No 00:21:06.101 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:06.101 00:21:06.101 ANA Group Identifier Maximum : 128 00:21:06.101 Number of ANA Group Identifiers : 128 00:21:06.101 Max Number of Allowed Namespaces : 1024 00:21:06.101 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:21:06.101 Command Effects Log Page: Supported 00:21:06.101 Get Log Page Extended Data: Supported 00:21:06.101 Telemetry Log Pages: Not Supported 00:21:06.101 Persistent Event Log Pages: Not Supported 00:21:06.101 Supported Log Pages Log Page: May Support 00:21:06.101 Commands Supported & Effects Log Page: Not Supported 00:21:06.101 Feature Identifiers & Effects Log Page:May Support 00:21:06.101 NVMe-MI Commands & Effects Log Page: May Support 00:21:06.101 Data Area 4 for Telemetry Log: Not Supported 00:21:06.101 Error Log Page Entries Supported: 128 00:21:06.101 Keep Alive: Supported 00:21:06.101 Keep Alive Granularity: 1000 ms 00:21:06.101 00:21:06.101 NVM Command Set Attributes 00:21:06.101 ========================== 00:21:06.101 Submission Queue Entry Size 00:21:06.101 Max: 64 00:21:06.101 Min: 64 00:21:06.101 Completion Queue Entry Size 00:21:06.101 Max: 16 00:21:06.101 Min: 16 00:21:06.101 Number of Namespaces: 1024 00:21:06.102 Compare Command: Not Supported 00:21:06.102 Write Uncorrectable Command: Not Supported 00:21:06.102 Dataset Management Command: Supported 00:21:06.102 Write Zeroes Command: Supported 00:21:06.102 Set Features Save Field: Not Supported 00:21:06.102 Reservations: Not Supported 00:21:06.102 Timestamp: Not Supported 00:21:06.102 Copy: Not Supported 00:21:06.102 Volatile Write Cache: Present 00:21:06.102 Atomic Write Unit (Normal): 1 00:21:06.102 Atomic Write Unit (PFail): 1 00:21:06.102 Atomic Compare & Write Unit: 1 00:21:06.102 Fused Compare & Write: Not Supported 00:21:06.102 Scatter-Gather List 00:21:06.102 SGL Command Set: Supported 00:21:06.102 SGL Keyed: Not Supported 00:21:06.102 SGL Bit Bucket Descriptor: Not Supported 00:21:06.102 SGL Metadata Pointer: Not Supported 00:21:06.102 Oversized SGL: Not Supported 00:21:06.102 SGL Metadata Address: Not Supported 00:21:06.102 SGL Offset: Supported 00:21:06.102 Transport SGL Data Block: Not Supported 00:21:06.102 Replay Protected Memory Block: Not Supported 00:21:06.102 00:21:06.102 Firmware Slot Information 00:21:06.102 ========================= 00:21:06.102 Active slot: 0 00:21:06.102 00:21:06.102 Asymmetric Namespace Access 00:21:06.102 =========================== 00:21:06.102 Change Count : 0 00:21:06.102 Number of ANA Group Descriptors : 1 00:21:06.102 ANA Group Descriptor : 0 00:21:06.102 ANA Group ID : 1 00:21:06.102 Number of NSID Values : 1 00:21:06.102 Change Count : 0 00:21:06.102 ANA State : 1 00:21:06.102 Namespace Identifier : 1 00:21:06.102 00:21:06.102 Commands Supported and Effects 00:21:06.102 ============================== 00:21:06.102 Admin Commands 00:21:06.102 -------------- 00:21:06.102 Get Log Page (02h): Supported 00:21:06.102 Identify (06h): Supported 00:21:06.102 Abort (08h): Supported 00:21:06.102 Set Features (09h): Supported 00:21:06.102 Get Features (0Ah): Supported 00:21:06.102 Asynchronous Event Request (0Ch): Supported 00:21:06.102 Keep Alive (18h): Supported 00:21:06.102 I/O Commands 00:21:06.102 ------------ 00:21:06.102 Flush (00h): Supported 00:21:06.102 Write (01h): Supported LBA-Change 00:21:06.102 Read (02h): Supported 00:21:06.102 Write Zeroes (08h): Supported LBA-Change 00:21:06.102 Dataset Management (09h): Supported 00:21:06.102 00:21:06.102 Error Log 00:21:06.102 ========= 00:21:06.102 Entry: 0 00:21:06.102 Error Count: 0x3 00:21:06.102 Submission Queue Id: 0x0 00:21:06.102 Command Id: 0x5 00:21:06.102 Phase Bit: 0 00:21:06.102 Status Code: 0x2 00:21:06.102 Status Code Type: 0x0 00:21:06.102 Do Not Retry: 1 00:21:06.102 Error 
Location: 0x28 00:21:06.102 LBA: 0x0 00:21:06.102 Namespace: 0x0 00:21:06.102 Vendor Log Page: 0x0 00:21:06.102 ----------- 00:21:06.102 Entry: 1 00:21:06.102 Error Count: 0x2 00:21:06.102 Submission Queue Id: 0x0 00:21:06.102 Command Id: 0x5 00:21:06.102 Phase Bit: 0 00:21:06.102 Status Code: 0x2 00:21:06.102 Status Code Type: 0x0 00:21:06.102 Do Not Retry: 1 00:21:06.102 Error Location: 0x28 00:21:06.102 LBA: 0x0 00:21:06.102 Namespace: 0x0 00:21:06.102 Vendor Log Page: 0x0 00:21:06.102 ----------- 00:21:06.102 Entry: 2 00:21:06.102 Error Count: 0x1 00:21:06.102 Submission Queue Id: 0x0 00:21:06.102 Command Id: 0x4 00:21:06.102 Phase Bit: 0 00:21:06.102 Status Code: 0x2 00:21:06.102 Status Code Type: 0x0 00:21:06.102 Do Not Retry: 1 00:21:06.102 Error Location: 0x28 00:21:06.102 LBA: 0x0 00:21:06.102 Namespace: 0x0 00:21:06.102 Vendor Log Page: 0x0 00:21:06.102 00:21:06.102 Number of Queues 00:21:06.102 ================ 00:21:06.102 Number of I/O Submission Queues: 128 00:21:06.102 Number of I/O Completion Queues: 128 00:21:06.102 00:21:06.102 ZNS Specific Controller Data 00:21:06.102 ============================ 00:21:06.102 Zone Append Size Limit: 0 00:21:06.102 00:21:06.102 00:21:06.102 Active Namespaces 00:21:06.102 ================= 00:21:06.102 get_feature(0x05) failed 00:21:06.102 Namespace ID:1 00:21:06.102 Command Set Identifier: NVM (00h) 00:21:06.102 Deallocate: Supported 00:21:06.102 Deallocated/Unwritten Error: Not Supported 00:21:06.102 Deallocated Read Value: Unknown 00:21:06.102 Deallocate in Write Zeroes: Not Supported 00:21:06.102 Deallocated Guard Field: 0xFFFF 00:21:06.102 Flush: Supported 00:21:06.102 Reservation: Not Supported 00:21:06.102 Namespace Sharing Capabilities: Multiple Controllers 00:21:06.102 Size (in LBAs): 1310720 (5GiB) 00:21:06.102 Capacity (in LBAs): 1310720 (5GiB) 00:21:06.102 Utilization (in LBAs): 1310720 (5GiB) 00:21:06.102 UUID: 6ac570f9-64db-4e59-8b72-df3ec3d5571e 00:21:06.102 Thin Provisioning: Not Supported 00:21:06.102 Per-NS Atomic Units: Yes 00:21:06.102 Atomic Boundary Size (Normal): 0 00:21:06.102 Atomic Boundary Size (PFail): 0 00:21:06.102 Atomic Boundary Offset: 0 00:21:06.102 NGUID/EUI64 Never Reused: No 00:21:06.102 ANA group ID: 1 00:21:06.102 Namespace Write Protected: No 00:21:06.102 Number of LBA Formats: 1 00:21:06.102 Current LBA Format: LBA Format #00 00:21:06.102 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:21:06.102 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:06.102 rmmod nvme_tcp 00:21:06.102 rmmod nvme_fabrics 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:21:06.102 02:25:07 
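Both identify dumps above come from SPDK's identify example, pointed first at the discovery subsystem and then at the exported subsystem; the three get_feature failures at the top of the second run appear to come from optional features the kernel target does not implement and do not fail the test. The two invocations, as shown in the trace:

  # discovery controller (first dump: 0 namespaces, Discovery Log Change Notices supported)
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  # the kernel NVM subsystem (second dump: ANA-enabled, one 5 GiB namespace backed by /dev/nvme1n1)
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'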
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:06.102 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:06.362 02:25:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:21:06.362 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:21:06.621 02:25:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:07.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:07.188 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:07.448 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:07.448 00:21:07.448 real 0m3.232s 00:21:07.448 user 0m1.191s 00:21:07.448 sys 0m1.440s 00:21:07.448 ************************************ 00:21:07.448 END TEST nvmf_identify_kernel_target 00:21:07.448 ************************************ 00:21:07.448 02:25:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:07.448 02:25:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.448 02:25:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:07.448 02:25:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:07.448 02:25:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:07.448 02:25:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.448 ************************************ 00:21:07.448 START TEST nvmf_auth_host 00:21:07.448 ************************************ 00:21:07.448 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:07.448 * Looking for test storage... 
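Just before nvmf_auth_host begins, the EXIT trap of the previous test ran clean_kernel_target, which dismantles the configfs tree in reverse order and unloads the nvmet modules; the echo 0 in that trace is assumed to go to the namespace's enable file, since the redirection is not shown. A sketch matching those steps:

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # assumed target
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet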
00:21:07.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:07.448 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:07.448 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:21:07.448 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:07.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.707 --rc genhtml_branch_coverage=1 00:21:07.707 --rc genhtml_function_coverage=1 00:21:07.707 --rc genhtml_legend=1 00:21:07.707 --rc geninfo_all_blocks=1 00:21:07.707 --rc geninfo_unexecuted_blocks=1 00:21:07.707 00:21:07.707 ' 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:07.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.707 --rc genhtml_branch_coverage=1 00:21:07.707 --rc genhtml_function_coverage=1 00:21:07.707 --rc genhtml_legend=1 00:21:07.707 --rc geninfo_all_blocks=1 00:21:07.707 --rc geninfo_unexecuted_blocks=1 00:21:07.707 00:21:07.707 ' 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:07.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.707 --rc genhtml_branch_coverage=1 00:21:07.707 --rc genhtml_function_coverage=1 00:21:07.707 --rc genhtml_legend=1 00:21:07.707 --rc geninfo_all_blocks=1 00:21:07.707 --rc geninfo_unexecuted_blocks=1 00:21:07.707 00:21:07.707 ' 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:07.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.707 --rc genhtml_branch_coverage=1 00:21:07.707 --rc genhtml_function_coverage=1 00:21:07.707 --rc genhtml_legend=1 00:21:07.707 --rc geninfo_all_blocks=1 00:21:07.707 --rc geninfo_unexecuted_blocks=1 00:21:07.707 00:21:07.707 ' 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.707 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:07.708 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:07.708 Cannot find device "nvmf_init_br" 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:07.708 Cannot find device "nvmf_init_br2" 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:07.708 Cannot find device "nvmf_tgt_br" 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:07.708 Cannot find device "nvmf_tgt_br2" 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:07.708 Cannot find device "nvmf_init_br" 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:07.708 Cannot find device "nvmf_init_br2" 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:07.708 Cannot find device "nvmf_tgt_br" 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:07.708 Cannot find device "nvmf_tgt_br2" 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:07.708 Cannot find device "nvmf_br" 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:07.708 Cannot find device "nvmf_init_if" 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:07.708 Cannot find device "nvmf_init_if2" 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:07.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:07.708 02:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:07.708 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:07.708 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
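The nvmf_veth_init trace above builds the virtual test topology: a target-side network namespace, two initiator-side and two target-side veth pairs, and a bridge that ties the peer ends together, with 10.0.0.1-2 addressed on the initiator side and 10.0.0.3-4 inside the namespace. A condensed sketch of the equivalent commands, reconstructed from the trace (the harness issues the link-up and master commands one by one; the loops here are only for brevity):

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if end gets an address, the *_br end joins the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiators outside, targets inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring every end up and bridge the peer ends
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done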
00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:07.966 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:07.966 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:21:07.966 00:21:07.966 --- 10.0.0.3 ping statistics --- 00:21:07.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.966 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:07.966 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:07.966 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:21:07.966 00:21:07.966 --- 10.0.0.4 ping statistics --- 00:21:07.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.966 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:07.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:21:07.966 00:21:07.966 --- 10.0.0.1 ping statistics --- 00:21:07.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.966 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:07.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:07.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:21:07.966 00:21:07.966 --- 10.0.0.2 ping statistics --- 00:21:07.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.966 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:07.966 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=93621 00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 93621 00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 93621 ']' 00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
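With the links bridged, the iptables ACCEPT rules for port 4420 in place, and the cross-namespace pings confirming connectivity, the trace loads nvme-tcp and starts the SPDK target inside the target namespace, then waits for its RPC socket (nvmfpid=93621 in this run). A minimal sketch of that launch, using the binary path, flags, and socket shown in the trace; the polling loop is illustrative, the harness uses its own waitforlisten helper:

  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!
  # poll until the target answers on the socket named in the trace (/var/tmp/spdk.sock)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done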
00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:08.225 02:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=da8d49e6cf4ca8df86a2f06d89c95d80 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.wFS 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key da8d49e6cf4ca8df86a2f06d89c95d80 0 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 da8d49e6cf4ca8df86a2f06d89c95d80 0 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=da8d49e6cf4ca8df86a2f06d89c95d80 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.wFS 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.wFS 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.wFS 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:08.483 02:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=5ebb883a597c9643bc980a8cdc14b0aca05f68cd9ecbc90a378ac827081a667f 00:21:08.483 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:21:08.484 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.Yjw 00:21:08.484 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 5ebb883a597c9643bc980a8cdc14b0aca05f68cd9ecbc90a378ac827081a667f 3 00:21:08.484 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 5ebb883a597c9643bc980a8cdc14b0aca05f68cd9ecbc90a378ac827081a667f 3 00:21:08.484 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:08.484 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:08.484 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=5ebb883a597c9643bc980a8cdc14b0aca05f68cd9ecbc90a378ac827081a667f 00:21:08.484 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:21:08.484 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.Yjw 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.Yjw 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Yjw 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=566cf17516ca428feb4ef3500b2a1a4da0bd31bce5ad72f3 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.TKy 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 566cf17516ca428feb4ef3500b2a1a4da0bd31bce5ad72f3 0 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 566cf17516ca428feb4ef3500b2a1a4da0bd31bce5ad72f3 0 
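Each gen_dhchap_key call in this stretch of the log draws a fresh secret from /dev/urandom with xxd (half as many bytes as the requested key length, since hex encoding doubles it), creates a mode-0600 temp file named after the digest, and records the path in the keys[]/ckeys[] arrays. A simplified sketch of one "null 32" iteration; the python one-liner the harness uses to wrap the hex secret into the DHHC-1:<digest>:...: text form is deliberately left out, so the final write below is only a placeholder:

  key=$(xxd -p -c0 -l 16 /dev/urandom)    # 16 random bytes -> 32 hex characters
  file=$(mktemp -t spdk.key-null.XXX)      # e.g. /tmp/spdk.key-null.wFS in this run
  # the real harness converts "$key" to the DHHC-1 representation before writing;
  # this placeholder stores the raw hex only so the sketch stays self-contained
  printf '%s\n' "$key" > "$file"
  chmod 0600 "$file"
  keys[0]=$file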
00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=566cf17516ca428feb4ef3500b2a1a4da0bd31bce5ad72f3 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.TKy 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.TKy 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.TKy 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=60e1ce77b8c7bf2bdd933d438d361a1201c1bf656c8e19b1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.Dte 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 60e1ce77b8c7bf2bdd933d438d361a1201c1bf656c8e19b1 2 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 60e1ce77b8c7bf2bdd933d438d361a1201c1bf656c8e19b1 2 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=60e1ce77b8c7bf2bdd933d438d361a1201c1bf656c8e19b1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.Dte 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.Dte 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Dte 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:08.743 02:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=1ab0634a534302da52cf62c66a33f5eb 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.GVK 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 1ab0634a534302da52cf62c66a33f5eb 1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 1ab0634a534302da52cf62c66a33f5eb 1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=1ab0634a534302da52cf62c66a33f5eb 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.GVK 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.GVK 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.GVK 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=254d0e5cbe6637a31fb275044d8dc749 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.sWx 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 254d0e5cbe6637a31fb275044d8dc749 1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 254d0e5cbe6637a31fb275044d8dc749 1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=254d0e5cbe6637a31fb275044d8dc749 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:21:08.743 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.sWx 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.sWx 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.sWx 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7b54edb2977bcf19888b70b6ba50f91a5e158bbf55f079aa 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.59K 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7b54edb2977bcf19888b70b6ba50f91a5e158bbf55f079aa 2 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7b54edb2977bcf19888b70b6ba50f91a5e158bbf55f079aa 2 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:09.002 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7b54edb2977bcf19888b70b6ba50f91a5e158bbf55f079aa 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.59K 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.59K 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.59K 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:21:09.003 02:25:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=cc8dffb5c4b3c9d871163414a52389c0 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Ldw 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key cc8dffb5c4b3c9d871163414a52389c0 0 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 cc8dffb5c4b3c9d871163414a52389c0 0 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=cc8dffb5c4b3c9d871163414a52389c0 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Ldw 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Ldw 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Ldw 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=20edf4bd8e54b5eb80038c44c913169399d6d40aebd84d4674d20b358e4b909e 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.BXb 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 20edf4bd8e54b5eb80038c44c913169399d6d40aebd84d4674d20b358e4b909e 3 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 20edf4bd8e54b5eb80038c44c913169399d6d40aebd84d4674d20b358e4b909e 3 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=20edf4bd8e54b5eb80038c44c913169399d6d40aebd84d4674d20b358e4b909e 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.BXb 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.BXb 00:21:09.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.BXb 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93621 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 93621 ']' 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:09.003 02:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wFS 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Yjw ]] 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Yjw 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.TKy 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.262 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Dte ]] 00:21:09.263 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Dte 00:21:09.263 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.263 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.263 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.263 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:09.263 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.GVK 00:21:09.263 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.263 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.sWx ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sWx 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.59K 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Ldw ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Ldw 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.BXb 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:09.522 02:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:09.522 02:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:09.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:09.781 Waiting for block devices as requested 00:21:09.781 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:10.040 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:10.607 No valid GPT data, bailing 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:10.607 No valid GPT data, bailing 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:10.607 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:10.866 No valid GPT data, bailing 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:10.866 No valid GPT data, bailing 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -a 10.0.0.1 -t tcp -s 4420 00:21:10.866 00:21:10.866 Discovery Log Number of Records 2, Generation counter 2 00:21:10.866 =====Discovery Log Entry 0====== 00:21:10.866 trtype: tcp 00:21:10.866 adrfam: ipv4 00:21:10.866 subtype: current discovery subsystem 00:21:10.866 treq: not specified, sq flow control disable supported 00:21:10.866 portid: 1 00:21:10.866 trsvcid: 4420 00:21:10.866 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:10.866 traddr: 10.0.0.1 00:21:10.866 eflags: none 00:21:10.866 sectype: none 00:21:10.866 =====Discovery Log Entry 1====== 00:21:10.866 trtype: tcp 00:21:10.866 adrfam: ipv4 00:21:10.866 subtype: nvme subsystem 00:21:10.866 treq: not specified, sq flow control disable supported 00:21:10.866 portid: 1 00:21:10.866 trsvcid: 4420 00:21:10.866 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:10.866 traddr: 10.0.0.1 00:21:10.866 eflags: none 00:21:10.866 sectype: none 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:10.866 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
10.0.0.1 ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 nvme0n1 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
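On the initiator side, connect_authenticate drives SPDK through rpc_cmd (rpc.py): it enables the digest/dhgroup combinations under test with bdev_nvme_set_options, attaches a controller with --dhchap-key/--dhchap-ctrlr-key, confirms via bdev_nvme_get_controllers | jq that exactly nvme0 came up, and detaches again. A standalone sketch of that cycle; key1/ckey1 are key names, and the assumption in the comments is that they were registered earlier through the keyring_file module, which this excerpt does not show:

```bash
# Sketch: the initiator-side sequence driven through SPDK's rpc.py, as traced above.
rpc=scripts/rpc.py

$rpc bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

# "key1"/"ckey1" are key *names*; the trace does not show how they were loaded.
# Assumed earlier steps, e.g.:
#   $rpc keyring_file_add_key key1  /path/to/key1
#   $rpc keyring_file_add_key ckey1 /path/to/ckey1

$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify that exactly this controller exists, then tear it down for the next iteration.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0
```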
host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.126 02:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.387 nvme0n1 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.387 
02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:11.387 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:11.388 02:25:13 
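The DHHC-1:xx:&lt;base64&gt;: strings are NVMe DH-HMAC-CHAP secrets in their standard textual representation: xx encodes the transformation applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. The log does not show where these particular secrets were generated; the sketch below only illustrates the format, assuming an nvme-cli build that provides the gen-dhchap-key helper:

```bash
# Sketch: generating and inspecting a DHHC-1 secret (assumes nvme-cli with gen-dhchap-key).
# The keys used by the test were produced elsewhere; this only shows the format.
key=$(nvme gen-dhchap-key --key-length 48 --hmac 0)
echo "$key"                          # e.g. DHHC-1:00:<base64>:

# Payload = secret || CRC-32, so a 48-byte secret decodes to 52 bytes.
b64=${key#DHHC-1:*:}                 # strip the "DHHC-1:xx:" prefix
b64=${b64%:}                         # strip the trailing colon
echo -n "$b64" | base64 -d | wc -c   # -> 52
```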
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.388 nvme0n1 00:21:11.388 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:11.664 02:25:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:11.664 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.665 nvme0n1 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.665 02:25:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.665 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.949 nvme0n1 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:11.949 
02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:21:11.949 nvme0n1 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.949 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:12.229 02:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:12.489 02:25:14 
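From this point the trace is the same set-key/connect/verify/detach cycle replayed for every combination produced by the loops at auth.sh@100-103, so only the digest, dhgroup and key id change between blocks. Schematically (array contents beyond those visible in the trace are assumptions):

```bash
# Schematic of the traced driver loop (auth.sh@100-103).
# Only sha256, ffdhe2048/ffdhe3072 and key ids 0-4 are visible in this excerpt;
# the full array contents are assumed from the option lists seen earlier.
for digest in "${digests[@]}"; do              # sha256, sha384, sha512
  for dhgroup in "${dhgroups[@]}"; do          # ffdhe2048 ... ffdhe8192
    for keyid in "${!keys[@]}"; do             # 0..4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach via rpc.py
    done
  done
done
```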
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.489 nvme0n1 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.489 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.748 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.748 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.748 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.748 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.748 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.748 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.748 02:25:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:12.748 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.749 02:25:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.749 nvme0n1 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
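The get_main_ns_ip helper that runs before every attach simply maps the active transport to the environment variable holding the initiator-facing address and prints its value (10.0.0.1 throughout this run). A condensed reconstruction of the logic visible in the trace; the fallback branches guarded by the skipped [[ -z ... ]] tests are not exercised in this excerpt and are omitted here:

```bash
# Reconstruction of the get_main_ns_ip logic visible in the trace: pick the
# env var that holds the initiator-facing IP for the active transport and
# print its value via indirect expansion. Fallback paths are omitted.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
    echo "${!ip}"                          # -> 10.0.0.1 in this run
}
```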
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.749 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.008 nvme0n1 00:21:13.008 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.008 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.008 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.008 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.008 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.009 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.268 nvme0n1 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.268 02:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.269 nvme0n1 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.269 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
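Note that key id 4 carries no controller secret (ckey is empty above), so the ${ckeys[keyid]:+...} expansion at auth.sh@58 drops the --dhchap-ctrlr-key option entirely and the following attach authenticates the host only, not bidirectionally. A minimal demo of that expansion; the array contents here are illustrative, not the test's real secrets:

```bash
# Demo of the ${ckeys[keyid]:+...} expansion seen at auth.sh@58: an empty
# controller secret makes the whole option pair vanish, so the attach is
# issued with --dhchap-key only (unidirectional authentication).
ckeys=( [1]="DHHC-1:02:<base64>:" [4]="" )

keyid=1; opts=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${opts[@]}"            # --dhchap-ctrlr-key ckey1

keyid=4; opts=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${opts[@]:-<none>}"    # <none> -> option pair omitted
```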
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:13.528 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.096 02:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.096 nvme0n1 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.096 02:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.356 02:25:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.356 nvme0n1 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.356 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.615 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.615 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.615 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.615 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.615 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.615 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.616 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.875 nvme0n1 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:14.875 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.876 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.135 nvme0n1 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:15.135 02:25:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.135 02:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.394 nvme0n1 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.394 02:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.772 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.031 nvme0n1 00:21:17.031 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.031 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.031 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.031 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.031 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.031 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.031 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.031 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.031 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.031 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.290 02:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.549 nvme0n1 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.549 02:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:17.549 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.550 02:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.550 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.809 nvme0n1 00:21:17.809 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.809 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.809 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.809 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.809 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.809 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:18.067 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:18.068 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.068 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.068 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:18.068 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.068 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:18.068 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:18.068 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:18.068 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:18.068 02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.068 
02:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.327 nvme0n1 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.327 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.586 nvme0n1 00:21:18.586 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.586 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.586 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.586 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.586 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.586 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.846 02:25:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.846 02:25:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.415 nvme0n1 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.415 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.984 nvme0n1 00:21:19.984 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.984 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.984 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.984 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.984 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.984 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.984 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.984 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.985 
02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.985 02:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.555 nvme0n1 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.555 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.124 nvme0n1 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.124 02:25:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:21.124 02:25:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.124 02:25:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.692 nvme0n1 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:21.692 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.693 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:21.952 nvme0n1 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:21.952 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.953 nvme0n1 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.953 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:22.213 
02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.213 nvme0n1 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.213 02:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.213 
02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.213 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.473 nvme0n1 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.473 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.733 nvme0n1 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.733 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.734 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.734 nvme0n1 00:21:22.734 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.734 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.734 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.734 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.734 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.734 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.993 
02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:22.993 02:25:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:22.993 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.994 nvme0n1 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:22.994 02:25:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.994 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.253 nvme0n1 00:21:23.253 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.253 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.253 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.253 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.253 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.253 02:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:23.253 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.254 02:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.254 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.513 nvme0n1 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:23.513 
02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
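The xtrace above repeats one sequence for every DH group (ffdhe3072, ffdhe4096, ffdhe6144) and key id (0-4): nvmet_auth_set_key programs the target side with the DHHC-1 key for that id, then connect_authenticate restricts the SPDK host to the sha384 digest and the DH group under test, attaches a controller with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), checks that a controller named nvme0 appears, and detaches it again. A minimal bash sketch of that loop follows; the configfs paths written by nvmet_auth_set_key and the rpc_cmd wrapper are not visible in this log and are assumptions here, and the key arrays are placeholders rather than the real test keys.

#!/usr/bin/env bash
# Sketch of the per-key DH-HMAC-CHAP loop reconstructed from the trace above.
# Assumed, not shown in the log: the nvmet configfs attribute names and the
# rpc_cmd wrapper; the real DHHC-1 key strings are omitted from the arrays.

HOSTNQN=nqn.2024-02.io.spdk:host0
SUBNQN=nqn.2024-02.io.spdk:cnode0
TARGET_IP=10.0.0.1            # get_main_ns_ip resolves NVMF_INITIATOR_IP for tcp
declare -a keys ckeys          # filled earlier in auth.sh with DHHC-1:xx:...: strings

rpc_cmd() { scripts/rpc.py "$@"; }    # assumption: thin wrapper around SPDK's rpc.py

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host=/sys/kernel/config/nvmet/hosts/$HOSTNQN    # assumed configfs layout
    echo "hmac(${digest})" > "$host/dhchap_hash"
    echo "$dhgroup"        > "$host/dhchap_dhgroup"
    echo "$key"            > "$host/dhchap_key"
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # pass a controller key only when one is defined for this key id
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$TARGET_IP" -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key${keyid}" "${ckey[@]}"
    # a successful handshake leaves a controller named nvme0 behind
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done

Detaching after every successful handshake keeps each digest/dhgroup/key combination independent of the one before it, which is why the get_controllers/detach_controller pair recurs throughout the trace.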
00:21:23.513 nvme0n1 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.513 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.514 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.514 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:23.773 02:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.773 nvme0n1 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.773 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.033 02:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.033 02:25:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.033 nvme0n1 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.033 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.293 02:25:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.293 nvme0n1 00:21:24.293 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.293 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.293 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.293 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.293 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.293 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.553 nvme0n1 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.553 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:24.812 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.813 nvme0n1 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.813 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.072 02:25:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.072 02:25:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.331 nvme0n1 00:21:25.331 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.331 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.331 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.331 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.332 02:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.332 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.591 nvme0n1 00:21:25.591 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.591 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.591 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.591 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.591 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.591 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.850 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.109 nvme0n1 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:26.109 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.110 02:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.369 nvme0n1 00:21:26.369 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.369 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.369 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.369 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.369 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.369 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:26.628 02:25:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.628 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.887 nvme0n1 00:21:26.887 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.887 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.887 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.888 02:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.457 nvme0n1 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.457 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.024 nvme0n1 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.024 02:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.024 02:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.024 02:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.590 nvme0n1 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.590 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:28.591 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.591 
02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.158 nvme0n1 00:21:29.158 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.158 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.158 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.158 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.158 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.158 02:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.158 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.158 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.158 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.158 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.417 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.986 nvme0n1 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:29.986 02:25:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:29.986 02:25:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.986 nvme0n1 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:29.986 02:25:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.986 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.245 nvme0n1 00:21:30.246 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.246 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.246 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.246 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.246 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.246 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.246 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.246 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.246 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.246 02:25:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.246 nvme0n1 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.246 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.505 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.506 nvme0n1 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.506 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.766 nvme0n1 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.766 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.025 nvme0n1 00:21:31.025 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.025 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.025 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.025 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.025 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.025 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.025 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.026 nvme0n1 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.026 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:31.286 
02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.286 02:25:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.286 nvme0n1 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.286 
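Every connect above is preceded by the same get_main_ns_ip expansion (nvmf/common.sh@765-779), which resolves the address to dial from the transport in use. The sketch below is reconstructed from those traced steps only; the variable holding the transport name and the indirect expansion at the end are assumptions, since the trace shows only the already-expanded values (tcp, NVMF_INITIATOR_IP, 10.0.0.1).

# Sketch of the address selection traced at nvmf/common.sh@765-779.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # Map each transport to the name of the variable carrying its address.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                  # traced as: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                   # NVMF_INITIATOR_IP here
    [[ -z ${!ip} ]] && return 1                            # traced as: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                          # 10.0.0.1 in this run
}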
02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.286 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.546 nvme0n1 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.546 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.805 nvme0n1 00:21:31.805 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.805 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.805 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.806 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.065 nvme0n1 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.065 
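The xtrace_disable / "set +x" / "[[ 0 == 0 ]]" lines bracketing every rpc_cmd come from the common test harness: command tracing is switched off while the JSON-RPC runs, and the saved exit status is then asserted to be zero. Only the effect is sketched here; the real plumbing lives in common/autotest_common.sh and is not reproduced by the trace.

# Illustrative stand-in for the harness behaviour seen around each RPC.
run_rpc_checked() {
    local status
    set +x                 # xtrace_disable: keep rpc.py internals out of the log
    rpc_cmd "$@"
    status=$?
    set -x
    [[ $status == 0 ]]     # the recurring "[[ 0 == 0 ]]" assertions in the trace
}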
02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:32.065 02:25:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.065 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.066 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:32.066 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:32.066 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:32.066 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:32.066 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:32.066 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:32.066 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.066 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.326 nvme0n1 00:21:32.326 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.326 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.326 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.326 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.326 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.326 02:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:32.326 02:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.326 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.586 nvme0n1 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.586 02:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.586 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.845 nvme0n1 00:21:32.845 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.845 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:32.846 
02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.846 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
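keyid 4 is the one entry with no controller key: the trace shows ckey= expanding to nothing, "[[ -z '' ]]" succeeding, and the attach above carrying only --dhchap-key key4, so the host authenticates itself without requesting controller (bidirectional) authentication. The "${ckeys[keyid]:+...}" expansion at host/auth.sh@58 is what drops the flag; the same mechanism in isolation, with placeholder secrets standing in for the DHHC-1 strings from the trace:

# How the optional --dhchap-ctrlr-key argument is assembled (host/auth.sh@58).
# ":+" expands to the bracketed words only when ckeys[keyid] is set and non-empty,
# so an empty controller secret yields an empty array and a one-way handshake.
declare -a ckeys=([0]=ckey0-secret [1]=ckey1-secret [2]=ckey2-secret [3]=ckey3-secret [4]=)

keyid=3
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"    # prints: --dhchap-ctrlr-key ckey3

keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"    # prints an empty line: the flag is omitted from the attach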
00:21:33.105 nvme0n1 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:33.105 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:33.106 02:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.106 02:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.363 nvme0n1 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.363 02:25:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:33.363 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:33.364 02:25:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.364 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.931 nvme0n1 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:33.931 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:33.932 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.932 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.932 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.191 nvme0n1 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:34.191 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.192 02:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.450 nvme0n1 00:21:34.450 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.450 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.450 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.451 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.451 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.451 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.710 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.969 nvme0n1 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.969 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGE4ZDQ5ZTZjZjRjYThkZjg2YTJmMDZkODljOTVkODA+dQk/: 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: ]] 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWViYjg4M2E1OTdjOTY0M2JjOTgwYThjZGMxNGIwYWNhMDVmNjhjZDllY2JjOTBhMzc4YWM4MjcwODFhNjY3ZmwB2+w=: 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.970 02:25:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.970 02:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.538 nvme0n1 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.538 02:25:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.538 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.106 nvme0n1 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:36.106 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.107 02:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.708 nvme0n1 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2I1NGVkYjI5NzdiY2YxOTg4OGI3MGI2YmE1MGY5MWE1ZTE1OGJiZjU1ZjA3OWFh6e6NIA==: 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: ]] 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M4ZGZmYjVjNGIzYzlkODcxMTYzNDE0YTUyMzg5YzCuOPYG: 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.708 02:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.291 nvme0n1 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjBlZGY0YmQ4ZTU0YjVlYjgwMDM4YzQ0YzkxMzE2OTM5OWQ2ZDQwYWViZDg0ZDQ2NzRkMjBiMzU4ZTRiOTA5ZfMjHe4=: 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:37.291 02:25:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:37.291 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:37.292 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:37.292 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:37.292 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:37.292 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.292 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.859 nvme0n1 00:21:37.859 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.859 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.859 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.859 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:37.859 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.859 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.859 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.859 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:37.860 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.119 request: 00:21:38.119 { 00:21:38.119 "name": "nvme0", 00:21:38.119 "trtype": "tcp", 00:21:38.119 "traddr": "10.0.0.1", 00:21:38.119 "adrfam": "ipv4", 00:21:38.119 "trsvcid": "4420", 00:21:38.119 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:38.119 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:38.119 "prchk_reftag": false, 00:21:38.119 "prchk_guard": false, 00:21:38.119 "hdgst": false, 00:21:38.119 "ddgst": false, 00:21:38.119 "allow_unrecognized_csi": false, 00:21:38.119 "method": "bdev_nvme_attach_controller", 00:21:38.119 "req_id": 1 00:21:38.119 } 00:21:38.119 Got JSON-RPC error response 00:21:38.119 response: 00:21:38.119 { 00:21:38.119 "code": -5, 00:21:38.119 "message": "Input/output error" 00:21:38.119 } 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:38.119 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.120 request: 00:21:38.120 { 00:21:38.120 "name": "nvme0", 00:21:38.120 "trtype": "tcp", 00:21:38.120 "traddr": "10.0.0.1", 00:21:38.120 "adrfam": "ipv4", 00:21:38.120 "trsvcid": "4420", 00:21:38.120 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:38.120 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:38.120 "prchk_reftag": false, 00:21:38.120 "prchk_guard": false, 00:21:38.120 "hdgst": false, 00:21:38.120 "ddgst": false, 00:21:38.120 "dhchap_key": "key2", 00:21:38.120 "allow_unrecognized_csi": false, 00:21:38.120 "method": "bdev_nvme_attach_controller", 00:21:38.120 "req_id": 1 00:21:38.120 } 00:21:38.120 Got JSON-RPC error response 00:21:38.120 response: 00:21:38.120 { 00:21:38.120 "code": -5, 00:21:38.120 "message": "Input/output error" 00:21:38.120 } 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:38.120 02:25:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.120 request: 00:21:38.120 { 00:21:38.120 "name": "nvme0", 00:21:38.120 "trtype": "tcp", 00:21:38.120 "traddr": "10.0.0.1", 00:21:38.120 "adrfam": "ipv4", 00:21:38.120 "trsvcid": "4420", 
00:21:38.120 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:38.120 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:38.120 "prchk_reftag": false, 00:21:38.120 "prchk_guard": false, 00:21:38.120 "hdgst": false, 00:21:38.120 "ddgst": false, 00:21:38.120 "dhchap_key": "key1", 00:21:38.120 "dhchap_ctrlr_key": "ckey2", 00:21:38.120 "allow_unrecognized_csi": false, 00:21:38.120 "method": "bdev_nvme_attach_controller", 00:21:38.120 "req_id": 1 00:21:38.120 } 00:21:38.120 Got JSON-RPC error response 00:21:38.120 response: 00:21:38.120 { 00:21:38.120 "code": -5, 00:21:38.120 "message": "Input/output error" 00:21:38.120 } 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.120 02:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.379 nvme0n1 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:38.379 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.380 request: 00:21:38.380 { 00:21:38.380 "name": "nvme0", 00:21:38.380 "dhchap_key": "key1", 00:21:38.380 "dhchap_ctrlr_key": "ckey2", 00:21:38.380 "method": "bdev_nvme_set_keys", 00:21:38.380 "req_id": 1 00:21:38.380 } 00:21:38.380 Got JSON-RPC error response 00:21:38.380 response: 00:21:38.380 
{ 00:21:38.380 "code": -5, 00:21:38.380 "message": "Input/output error" 00:21:38.380 } 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:38.380 02:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTY2Y2YxNzUxNmNhNDI4ZmViNGVmMzUwMGIyYTFhNGRhMGJkMzFiY2U1YWQ3MmYzOzStDQ==: 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: ]] 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjBlMWNlNzdiOGM3YmYyYmRkOTMzZDQzOGQzNjFhMTIwMWMxYmY2NTZjOGUxOWIxqJdUtQ==: 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.758 nvme0n1 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWFiMDYzNGE1MzQzMDJkYTUyY2Y2MmM2NmEzM2Y1ZWIgnP11: 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: ]] 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjU0ZDBlNWNiZTY2MzdhMzFmYjI3NTA0NGQ4ZGM3NDlm6n8K: 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.758 request: 00:21:39.758 { 00:21:39.758 "name": "nvme0", 00:21:39.758 "dhchap_key": "key2", 00:21:39.758 "dhchap_ctrlr_key": "ckey1", 00:21:39.758 "method": "bdev_nvme_set_keys", 00:21:39.758 "req_id": 1 00:21:39.758 } 00:21:39.758 Got JSON-RPC error response 00:21:39.758 response: 00:21:39.758 { 00:21:39.758 "code": -13, 00:21:39.758 "message": "Permission denied" 00:21:39.758 } 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:39.758 02:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:40.695 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.695 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:40.695 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.695 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.695 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.695 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:40.695 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:40.695 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:40.695 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:40.695 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:21:40.695 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:40.954 rmmod nvme_tcp 00:21:40.954 rmmod nvme_fabrics 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 93621 ']' 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 93621 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 93621 ']' 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 93621 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93621 00:21:40.954 killing process with pid 93621 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93621' 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 93621 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 93621 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:40.954 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:40.954 02:25:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:41.214 02:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:41.214 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:41.214 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:41.214 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:21:41.214 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:41.214 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:41.214 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:41.214 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:41.214 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:21:41.214 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:21:41.214 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:42.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:42.151 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
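For readability, the kernel nvmet teardown traced above condenses to the plain-shell sketch below. The configfs paths, NQNs, and module names are copied verbatim from the trace; the ordering comments are editorial, and the commands assume root privileges and that the initiator side has already disconnected:

# teardown of the in-kernel nvmet target configured for the auth test, as traced above
rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
# (the trace also disables the namespace via an 'echo 0' whose redirect target xtrace does not print)
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe -r nvmet_tcp nvmet   # unload the kernel target modules last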
00:21:42.151 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:42.151 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.wFS /tmp/spdk.key-null.TKy /tmp/spdk.key-sha256.GVK /tmp/spdk.key-sha384.59K /tmp/spdk.key-sha512.BXb /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:42.151 02:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:42.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:42.410 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:42.410 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:42.669 ************************************ 00:21:42.669 END TEST nvmf_auth_host 00:21:42.669 ************************************ 00:21:42.669 00:21:42.669 real 0m35.099s 00:21:42.669 user 0m32.389s 00:21:42.669 sys 0m3.809s 00:21:42.669 02:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.669 02:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.669 02:25:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:42.670 02:25:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:42.670 02:25:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:42.670 02:25:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:42.670 02:25:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.670 ************************************ 00:21:42.670 START TEST nvmf_digest 00:21:42.670 ************************************ 00:21:42.670 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:42.670 * Looking for test storage... 
00:21:42.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:42.670 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:42.670 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:21:42.670 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:42.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.930 --rc genhtml_branch_coverage=1 00:21:42.930 --rc genhtml_function_coverage=1 00:21:42.930 --rc genhtml_legend=1 00:21:42.930 --rc geninfo_all_blocks=1 00:21:42.930 --rc geninfo_unexecuted_blocks=1 00:21:42.930 00:21:42.930 ' 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:42.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.930 --rc genhtml_branch_coverage=1 00:21:42.930 --rc genhtml_function_coverage=1 00:21:42.930 --rc genhtml_legend=1 00:21:42.930 --rc geninfo_all_blocks=1 00:21:42.930 --rc geninfo_unexecuted_blocks=1 00:21:42.930 00:21:42.930 ' 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:42.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.930 --rc genhtml_branch_coverage=1 00:21:42.930 --rc genhtml_function_coverage=1 00:21:42.930 --rc genhtml_legend=1 00:21:42.930 --rc geninfo_all_blocks=1 00:21:42.930 --rc geninfo_unexecuted_blocks=1 00:21:42.930 00:21:42.930 ' 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:42.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.930 --rc genhtml_branch_coverage=1 00:21:42.930 --rc genhtml_function_coverage=1 00:21:42.930 --rc genhtml_legend=1 00:21:42.930 --rc geninfo_all_blocks=1 00:21:42.930 --rc geninfo_unexecuted_blocks=1 00:21:42.930 00:21:42.930 ' 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.930 02:25:44 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.930 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:42.931 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:42.931 Cannot find device "nvmf_init_br" 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:42.931 Cannot find device "nvmf_init_br2" 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:42.931 Cannot find device "nvmf_tgt_br" 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:42.931 Cannot find device "nvmf_tgt_br2" 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:42.931 Cannot find device "nvmf_init_br" 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:42.931 Cannot find device "nvmf_init_br2" 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:42.931 Cannot find device "nvmf_tgt_br" 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:42.931 Cannot find device "nvmf_tgt_br2" 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:42.931 Cannot find device "nvmf_br" 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:42.931 Cannot find device "nvmf_init_if" 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:42.931 Cannot find device "nvmf_init_if2" 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:42.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:42.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:42.931 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:43.191 02:25:44 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:43.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:43.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:21:43.191 00:21:43.191 --- 10.0.0.3 ping statistics --- 00:21:43.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.191 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:43.191 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:43.191 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:21:43.191 00:21:43.191 --- 10.0.0.4 ping statistics --- 00:21:43.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.191 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:43.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:43.191 00:21:43.191 --- 10.0.0.1 ping statistics --- 00:21:43.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.191 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:43.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:21:43.191 00:21:43.191 --- 10.0.0.2 ping statistics --- 00:21:43.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.191 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:43.191 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:43.192 02:25:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:43.192 ************************************ 00:21:43.192 START TEST nvmf_digest_clean 00:21:43.192 ************************************ 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
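The nvmf_veth_init sequence logged above builds the test topology from scratch: the initiator veth ends stay in the default namespace, the target-side ends are moved into nvmf_tgt_ns_spdk, every bridge-side peer is enslaved to nvmf_br, and iptables rules admit TCP port 4420. The four pings then confirm reachability in both directions. A condensed sketch of the same steps for the first initiator/target pair, with interface names and 10.0.0.0/24 addresses taken from the log (the second pair is set up identically, and the iptables comment markers are omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # join both sides on the bridge
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                          # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target namespace -> initiator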
00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:43.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=95277 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 95277 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95277 ']' 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.192 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:43.451 [2024-11-08 02:25:45.080010] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:43.451 [2024-11-08 02:25:45.080129] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.451 [2024-11-08 02:25:45.223179] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.451 [2024-11-08 02:25:45.266212] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.451 [2024-11-08 02:25:45.266279] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.451 [2024-11-08 02:25:45.266293] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.451 [2024-11-08 02:25:45.266304] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.451 [2024-11-08 02:25:45.266313] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
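nvmfappstart then launches the target inside that namespace with --wait-for-rpc, so the app pauses after early initialization and only continues once it is configured over the /var/tmp/spdk.sock RPC socket. A rough equivalent of the launch-and-wait step; the polling loop is only an illustrative stand-in for the waitforlisten helper, whose body is not shown in this log:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # Poll until the target answers on its RPC socket (UNIX sockets are visible across namespaces).
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done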
00:21:43.451 [2024-11-08 02:25:45.266349] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.451 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.451 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:43.451 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:43.451 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:43.451 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:43.710 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.710 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:43.710 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:43.711 [2024-11-08 02:25:45.406734] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:43.711 null0 00:21:43.711 [2024-11-08 02:25:45.442903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.711 [2024-11-08 02:25:45.467061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95300 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95300 /var/tmp/bperf.sock 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95300 ']' 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:43.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.711 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:43.711 [2024-11-08 02:25:45.535336] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:43.711 [2024-11-08 02:25:45.535586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95300 ] 00:21:43.969 [2024-11-08 02:25:45.675231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.969 [2024-11-08 02:25:45.718573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.969 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.969 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:43.969 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:43.969 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:43.969 02:25:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:44.228 [2024-11-08 02:25:46.086675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:44.487 02:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.487 02:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.745 nvme0n1 00:21:44.745 02:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:44.745 02:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:44.745 Running I/O for 2 seconds... 
00:21:47.056 17399.00 IOPS, 67.96 MiB/s [2024-11-08T02:25:48.940Z] 17589.50 IOPS, 68.71 MiB/s 00:21:47.056 Latency(us) 00:21:47.056 [2024-11-08T02:25:48.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.056 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:47.056 nvme0n1 : 2.01 17618.00 68.82 0.00 0.00 7260.52 6642.97 22520.55 00:21:47.056 [2024-11-08T02:25:48.940Z] =================================================================================================================== 00:21:47.056 [2024-11-08T02:25:48.940Z] Total : 17618.00 68.82 0.00 0.00 7260.52 6642.97 22520.55 00:21:47.056 { 00:21:47.056 "results": [ 00:21:47.056 { 00:21:47.056 "job": "nvme0n1", 00:21:47.056 "core_mask": "0x2", 00:21:47.056 "workload": "randread", 00:21:47.056 "status": "finished", 00:21:47.056 "queue_depth": 128, 00:21:47.056 "io_size": 4096, 00:21:47.056 "runtime": 2.011238, 00:21:47.056 "iops": 17618.004433090464, 00:21:47.056 "mibps": 68.82032981675962, 00:21:47.056 "io_failed": 0, 00:21:47.056 "io_timeout": 0, 00:21:47.056 "avg_latency_us": 7260.515875558657, 00:21:47.056 "min_latency_us": 6642.967272727273, 00:21:47.056 "max_latency_us": 22520.552727272727 00:21:47.056 } 00:21:47.056 ], 00:21:47.056 "core_count": 1 00:21:47.056 } 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:47.056 | select(.opcode=="crc32c") 00:21:47.056 | "\(.module_name) \(.executed)"' 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95300 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95300 ']' 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95300 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95300 00:21:47.056 killing process with pid 95300 00:21:47.056 Received shutdown signal, test time was about 2.000000 seconds 00:21:47.056 00:21:47.056 Latency(us) 00:21:47.056 [2024-11-08T02:25:48.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:47.056 [2024-11-08T02:25:48.940Z] =================================================================================================================== 00:21:47.056 [2024-11-08T02:25:48.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95300' 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95300 00:21:47.056 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95300 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95350 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95350 /var/tmp/bperf.sock 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95350 ']' 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:47.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:47.315 02:25:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:47.315 [2024-11-08 02:25:49.035501] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:47.315 [2024-11-08 02:25:49.035761] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:21:47.315 Zero copy mechanism will not be used. 
00:21:47.315 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95350 ] 00:21:47.315 [2024-11-08 02:25:49.165916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.573 [2024-11-08 02:25:49.201255] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.573 02:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.573 02:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:47.573 02:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:47.573 02:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:47.573 02:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:47.833 [2024-11-08 02:25:49.584278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:47.833 02:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:47.833 02:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.092 nvme0n1 00:21:48.092 02:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:48.092 02:25:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:48.350 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:48.350 Zero copy mechanism will not be used. 00:21:48.350 Running I/O for 2 seconds... 
00:21:50.221 8672.00 IOPS, 1084.00 MiB/s [2024-11-08T02:25:52.105Z] 8712.00 IOPS, 1089.00 MiB/s 00:21:50.221 Latency(us) 00:21:50.221 [2024-11-08T02:25:52.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.221 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:50.221 nvme0n1 : 2.00 8707.04 1088.38 0.00 0.00 1834.79 1653.29 3470.43 00:21:50.221 [2024-11-08T02:25:52.105Z] =================================================================================================================== 00:21:50.221 [2024-11-08T02:25:52.105Z] Total : 8707.04 1088.38 0.00 0.00 1834.79 1653.29 3470.43 00:21:50.221 { 00:21:50.221 "results": [ 00:21:50.221 { 00:21:50.221 "job": "nvme0n1", 00:21:50.221 "core_mask": "0x2", 00:21:50.221 "workload": "randread", 00:21:50.221 "status": "finished", 00:21:50.221 "queue_depth": 16, 00:21:50.221 "io_size": 131072, 00:21:50.221 "runtime": 2.002977, 00:21:50.221 "iops": 8707.039571597677, 00:21:50.221 "mibps": 1088.3799464497097, 00:21:50.221 "io_failed": 0, 00:21:50.221 "io_timeout": 0, 00:21:50.221 "avg_latency_us": 1834.7944887406172, 00:21:50.221 "min_latency_us": 1653.2945454545454, 00:21:50.221 "max_latency_us": 3470.429090909091 00:21:50.221 } 00:21:50.221 ], 00:21:50.221 "core_count": 1 00:21:50.221 } 00:21:50.221 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:50.221 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:50.221 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:50.221 | select(.opcode=="crc32c") 00:21:50.221 | "\(.module_name) \(.executed)"' 00:21:50.221 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:50.221 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95350 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95350 ']' 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95350 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95350 00:21:50.788 killing process with pid 95350 00:21:50.788 Received shutdown signal, test time was about 2.000000 seconds 00:21:50.788 00:21:50.788 Latency(us) 00:21:50.788 [2024-11-08T02:25:52.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:50.788 [2024-11-08T02:25:52.672Z] =================================================================================================================== 00:21:50.788 [2024-11-08T02:25:52.672Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95350' 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95350 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95350 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95396 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95396 /var/tmp/bperf.sock 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95396 ']' 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:50.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:50.788 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:50.788 [2024-11-08 02:25:52.597319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
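Each run_bperf invocation above and below repeats the same recipe: start bdevperf against its own RPC socket with --wait-for-rpc, finish its initialization, attach the listener at 10.0.0.3:4420 as bdev nvme0 with data digest enabled (--ddgst), then drive the workload for two seconds through bdevperf.py. Condensed from the commands in this log; only the workload flags (-w/-o/-q) change between runs, and the rpc helper below is shorthand rather than part of the test script:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    bperfpid=$!
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    # (waitforlisten-style polling on /var/tmp/bperf.sock omitted for brevity)
    rpc framework_start_init
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests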
00:21:50.788 [2024-11-08 02:25:52.597589] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95396 ] 00:21:51.047 [2024-11-08 02:25:52.732492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.047 [2024-11-08 02:25:52.766059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.047 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.047 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:51.047 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:51.047 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:51.047 02:25:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:51.307 [2024-11-08 02:25:53.100588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:51.307 02:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:51.307 02:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:51.874 nvme0n1 00:21:51.874 02:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:51.874 02:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:51.874 Running I/O for 2 seconds... 
00:21:53.742 18924.00 IOPS, 73.92 MiB/s [2024-11-08T02:25:55.626Z] 19050.50 IOPS, 74.42 MiB/s 00:21:53.742 Latency(us) 00:21:53.742 [2024-11-08T02:25:55.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.742 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:53.742 nvme0n1 : 2.01 19091.21 74.58 0.00 0.00 6699.53 6255.71 15252.01 00:21:53.742 [2024-11-08T02:25:55.626Z] =================================================================================================================== 00:21:53.742 [2024-11-08T02:25:55.626Z] Total : 19091.21 74.58 0.00 0.00 6699.53 6255.71 15252.01 00:21:53.742 { 00:21:53.742 "results": [ 00:21:53.742 { 00:21:53.742 "job": "nvme0n1", 00:21:53.742 "core_mask": "0x2", 00:21:53.742 "workload": "randwrite", 00:21:53.742 "status": "finished", 00:21:53.742 "queue_depth": 128, 00:21:53.742 "io_size": 4096, 00:21:53.742 "runtime": 2.009092, 00:21:53.742 "iops": 19091.211353188406, 00:21:53.742 "mibps": 74.57504434839221, 00:21:53.742 "io_failed": 0, 00:21:53.742 "io_timeout": 0, 00:21:53.742 "avg_latency_us": 6699.52817906882, 00:21:53.742 "min_latency_us": 6255.709090909091, 00:21:53.742 "max_latency_us": 15252.014545454545 00:21:53.742 } 00:21:53.742 ], 00:21:53.742 "core_count": 1 00:21:53.742 } 00:21:53.742 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:53.742 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:53.742 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:53.742 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:53.742 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:53.742 | select(.opcode=="crc32c") 00:21:53.742 | "\(.module_name) \(.executed)"' 00:21:54.000 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:54.000 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:54.000 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:54.000 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:54.000 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95396 00:21:54.000 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95396 ']' 00:21:54.000 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95396 00:21:54.000 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:54.000 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:54.000 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95396 00:21:54.258 killing process with pid 95396 00:21:54.258 Received shutdown signal, test time was about 2.000000 seconds 00:21:54.258 00:21:54.258 Latency(us) 00:21:54.258 [2024-11-08T02:25:56.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:54.258 [2024-11-08T02:25:56.142Z] =================================================================================================================== 00:21:54.258 [2024-11-08T02:25:56.142Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.258 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:54.258 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:54.258 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95396' 00:21:54.258 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95396 00:21:54.258 02:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95396 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95444 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95444 /var/tmp/bperf.sock 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95444 ']' 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:54.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.258 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:54.258 [2024-11-08 02:25:56.095322] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
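After every two-second run the test reads the accel framework statistics back from the bdevperf instance and checks that crc32c digest work was actually executed, and by the expected module — software, since DSA offload is disabled (scan_dsa=false) in all of these runs. The jq filter is the one shown in the log; the surrounding shell is a minimal sketch of the get_accel_stats check:

    stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
    read -r acc_module acc_executed < <(jq -rc '.operations[]
        | select(.opcode=="crc32c")
        | "\(.module_name) \(.executed)"' <<< "$stats")
    # Pass if at least one crc32c operation ran and it ran in the software module.
    (( acc_executed > 0 )) && [[ $acc_module == software ]]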
00:21:54.258 [2024-11-08 02:25:56.095585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95444 ] 00:21:54.259 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:54.259 Zero copy mechanism will not be used. 00:21:54.517 [2024-11-08 02:25:56.235495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.517 [2024-11-08 02:25:56.269669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.517 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.517 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:54.517 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:54.517 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:54.517 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:54.775 [2024-11-08 02:25:56.549225] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:54.775 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:54.775 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:55.033 nvme0n1 00:21:55.292 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:55.292 02:25:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:55.292 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:55.292 Zero copy mechanism will not be used. 00:21:55.292 Running I/O for 2 seconds... 
00:21:57.164 7419.00 IOPS, 927.38 MiB/s [2024-11-08T02:25:59.048Z] 7410.00 IOPS, 926.25 MiB/s 00:21:57.164 Latency(us) 00:21:57.164 [2024-11-08T02:25:59.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.164 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:57.164 nvme0n1 : 2.00 7407.99 926.00 0.00 0.00 2154.95 1563.93 5987.61 00:21:57.164 [2024-11-08T02:25:59.048Z] =================================================================================================================== 00:21:57.164 [2024-11-08T02:25:59.048Z] Total : 7407.99 926.00 0.00 0.00 2154.95 1563.93 5987.61 00:21:57.164 { 00:21:57.164 "results": [ 00:21:57.164 { 00:21:57.164 "job": "nvme0n1", 00:21:57.164 "core_mask": "0x2", 00:21:57.164 "workload": "randwrite", 00:21:57.164 "status": "finished", 00:21:57.164 "queue_depth": 16, 00:21:57.164 "io_size": 131072, 00:21:57.164 "runtime": 2.003648, 00:21:57.164 "iops": 7407.98783019772, 00:21:57.164 "mibps": 925.998478774715, 00:21:57.164 "io_failed": 0, 00:21:57.164 "io_timeout": 0, 00:21:57.164 "avg_latency_us": 2154.9523793891212, 00:21:57.164 "min_latency_us": 1563.9272727272728, 00:21:57.164 "max_latency_us": 5987.607272727273 00:21:57.164 } 00:21:57.164 ], 00:21:57.164 "core_count": 1 00:21:57.164 } 00:21:57.164 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:57.164 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:57.422 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:57.422 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:57.422 | select(.opcode=="crc32c") 00:21:57.422 | "\(.module_name) \(.executed)"' 00:21:57.422 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95444 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95444 ']' 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95444 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95444 00:21:57.681 killing process with pid 95444 00:21:57.681 Received shutdown signal, test time was about 2.000000 seconds 00:21:57.681 00:21:57.681 Latency(us) 00:21:57.681 [2024-11-08T02:25:59.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:57.681 [2024-11-08T02:25:59.565Z] =================================================================================================================== 00:21:57.681 [2024-11-08T02:25:59.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95444' 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95444 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95444 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 95277 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95277 ']' 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95277 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95277 00:21:57.681 killing process with pid 95277 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95277' 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95277 00:21:57.681 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95277 00:21:57.940 ************************************ 00:21:57.941 END TEST nvmf_digest_clean 00:21:57.941 ************************************ 00:21:57.941 00:21:57.941 real 0m14.632s 00:21:57.941 user 0m28.445s 00:21:57.941 sys 0m4.321s 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:57.941 ************************************ 00:21:57.941 START TEST nvmf_digest_error 00:21:57.941 ************************************ 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:21:57.941 02:25:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=95520 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 95520 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95520 ']' 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.941 02:25:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:57.941 [2024-11-08 02:25:59.763550] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:57.941 [2024-11-08 02:25:59.763644] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.206 [2024-11-08 02:25:59.900070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.206 [2024-11-08 02:25:59.932522] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.206 [2024-11-08 02:25:59.932570] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.207 [2024-11-08 02:25:59.932595] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.207 [2024-11-08 02:25:59.932602] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.207 [2024-11-08 02:25:59.932608] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
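The nvmf_digest_error variant starts a second target instance (pid 95520 above), again inside the namespace and again with --wait-for-rpc: the pause is what lets the test route the crc32c opcode through the error-injecting accel module before the framework initializes, as the entries that follow show (accel_assign_opc -o crc32c -m error, later accel_error_inject_error -o crc32c -t corrupt -i 256). A sketch of that target-side configuration, using only RPCs that appear in this log; the framework_start_init call is assumed to be issued by the common target config, whose full body the log does not print:

    SPDK=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }   # shorthand, not from the test script
    rpc accel_assign_opc -o crc32c -m error      # digest crc32c now goes through the error module
    rpc framework_start_init                     # assumed: part of the common target config
    # Once bdevperf has attached nvme0n1, corrupt crc32c results so the host sees digest errors:
    rpc accel_error_inject_error -o crc32c -t corrupt -i 256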
00:21:58.207 [2024-11-08 02:25:59.932632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:59.143 [2024-11-08 02:26:00.753071] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:59.143 [2024-11-08 02:26:00.791668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:59.143 null0 00:21:59.143 [2024-11-08 02:26:00.822542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.143 [2024-11-08 02:26:00.846648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95552 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95552 /var/tmp/bperf.sock 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:59.143 02:26:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95552 ']' 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:59.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.143 02:26:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:59.143 [2024-11-08 02:26:00.912127] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:59.143 [2024-11-08 02:26:00.912415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95552 ] 00:21:59.402 [2024-11-08 02:26:01.054559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.402 [2024-11-08 02:26:01.097158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.402 [2024-11-08 02:26:01.130640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:00.337 02:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.337 02:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:22:00.337 02:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:00.337 02:26:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:00.337 02:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:00.337 02:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.337 02:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:00.337 02:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.337 02:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:00.337 02:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:00.596 nvme0n1 00:22:00.596 02:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:00.596 02:26:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.596 02:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:00.596 02:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.596 02:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:00.596 02:26:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:00.855 Running I/O for 2 seconds... 00:22:00.855 [2024-11-08 02:26:02.513589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.513634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.513664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.528697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.528734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.528763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.542918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.542978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.543007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.557242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.557277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.557305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.571408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.571443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.571471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.585648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.585698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:916 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.585726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.599825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.599859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.599888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.613871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.613905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.613932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.628038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.628073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.628101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.642213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.642248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.642276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.656683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.656717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.656745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.670780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.670814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.670842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.685003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.685038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:2457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.685066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.699129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.699174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.699202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.713313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.713500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.713516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:00.855 [2024-11-08 02:26:02.727816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:00.855 [2024-11-08 02:26:02.728014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:00.855 [2024-11-08 02:26:02.728030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.114 [2024-11-08 02:26:02.743131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.114 [2024-11-08 02:26:02.743180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.114 [2024-11-08 02:26:02.743209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.114 [2024-11-08 02:26:02.757266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.114 [2024-11-08 02:26:02.757447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.114 [2024-11-08 02:26:02.757463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.114 [2024-11-08 02:26:02.771702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.114 [2024-11-08 02:26:02.771882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.114 [2024-11-08 02:26:02.771900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.114 [2024-11-08 02:26:02.785977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.114 [2024-11-08 02:26:02.786012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.114 [2024-11-08 02:26:02.786040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.114 [2024-11-08 02:26:02.800059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.114 [2024-11-08 02:26:02.800094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.114 [2024-11-08 02:26:02.800151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.114 [2024-11-08 02:26:02.814078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.114 [2024-11-08 02:26:02.814154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.114 [2024-11-08 02:26:02.814183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.114 [2024-11-08 02:26:02.828225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.114 [2024-11-08 02:26:02.828259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.114 [2024-11-08 02:26:02.828286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.114 [2024-11-08 02:26:02.842418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.114 [2024-11-08 02:26:02.842452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.114 [2024-11-08 02:26:02.842480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.114 [2024-11-08 02:26:02.856793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.114 [2024-11-08 02:26:02.856827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.114 [2024-11-08 02:26:02.856855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.114 [2024-11-08 02:26:02.871159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.115 [2024-11-08 02:26:02.871372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.115 [2024-11-08 02:26:02.871389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.115 [2024-11-08 02:26:02.886234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.115 
[2024-11-08 02:26:02.886420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.115 [2024-11-08 02:26:02.886437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.115 [2024-11-08 02:26:02.901377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.115 [2024-11-08 02:26:02.901555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.115 [2024-11-08 02:26:02.901571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.115 [2024-11-08 02:26:02.915589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.115 [2024-11-08 02:26:02.915769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.115 [2024-11-08 02:26:02.915785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.115 [2024-11-08 02:26:02.929866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.115 [2024-11-08 02:26:02.929902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.115 [2024-11-08 02:26:02.929929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.115 [2024-11-08 02:26:02.944019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.115 [2024-11-08 02:26:02.944053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.115 [2024-11-08 02:26:02.944080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.115 [2024-11-08 02:26:02.957924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.115 [2024-11-08 02:26:02.957959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.115 [2024-11-08 02:26:02.957986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.115 [2024-11-08 02:26:02.971920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.115 [2024-11-08 02:26:02.972134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.115 [2024-11-08 02:26:02.972153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.115 [2024-11-08 02:26:02.986014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xdff5a0) 00:22:01.115 [2024-11-08 02:26:02.986049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.115 [2024-11-08 02:26:02.986076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.374 [2024-11-08 02:26:03.001146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.374 [2024-11-08 02:26:03.001182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.374 [2024-11-08 02:26:03.001210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.374 [2024-11-08 02:26:03.015190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.374 [2024-11-08 02:26:03.015423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.374 [2024-11-08 02:26:03.015439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.374 [2024-11-08 02:26:03.029493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.374 [2024-11-08 02:26:03.029528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.374 [2024-11-08 02:26:03.029556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.374 [2024-11-08 02:26:03.043543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.374 [2024-11-08 02:26:03.043576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.374 [2024-11-08 02:26:03.043603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.374 [2024-11-08 02:26:03.057486] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.374 [2024-11-08 02:26:03.057519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.057546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.071637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.071671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.071699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.085539] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.085573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.085600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.099597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.099630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.099657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.113530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.113564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.113591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.127572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.127604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.127631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.142246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.142429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.142445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.157260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.157443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.157459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.171959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.172167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.172186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:01.375 [2024-11-08 02:26:03.186117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.186150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.186177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.200215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.200250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.200278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.214232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.214265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.214292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.228253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.228287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.228315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.375 [2024-11-08 02:26:03.242290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.375 [2024-11-08 02:26:03.242323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.375 [2024-11-08 02:26:03.242350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.634 [2024-11-08 02:26:03.256939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.634 [2024-11-08 02:26:03.256999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.634 [2024-11-08 02:26:03.257033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.634 [2024-11-08 02:26:03.271401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.634 [2024-11-08 02:26:03.271600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.634 [2024-11-08 02:26:03.271618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.634 [2024-11-08 02:26:03.285707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.634 [2024-11-08 02:26:03.285743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.634 [2024-11-08 02:26:03.285770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.634 [2024-11-08 02:26:03.301221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.634 [2024-11-08 02:26:03.301259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.634 [2024-11-08 02:26:03.301272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.634 [2024-11-08 02:26:03.318303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.634 [2024-11-08 02:26:03.318343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.634 [2024-11-08 02:26:03.318373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.634 [2024-11-08 02:26:03.334294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.634 [2024-11-08 02:26:03.334330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.634 [2024-11-08 02:26:03.334358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.634 [2024-11-08 02:26:03.349910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.634 [2024-11-08 02:26:03.349945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.635 [2024-11-08 02:26:03.349973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.635 [2024-11-08 02:26:03.364517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.635 [2024-11-08 02:26:03.364552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.635 [2024-11-08 02:26:03.364580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.635 [2024-11-08 02:26:03.378520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.635 [2024-11-08 02:26:03.378553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.635 [2024-11-08 02:26:03.378581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.635 [2024-11-08 02:26:03.393692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.635 [2024-11-08 02:26:03.393727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.635 [2024-11-08 02:26:03.393755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.635 [2024-11-08 02:26:03.409687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.635 [2024-11-08 02:26:03.409739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.635 [2024-11-08 02:26:03.409767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.635 [2024-11-08 02:26:03.432088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.635 [2024-11-08 02:26:03.432324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.635 [2024-11-08 02:26:03.432342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.635 [2024-11-08 02:26:03.447599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.635 [2024-11-08 02:26:03.447781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.635 [2024-11-08 02:26:03.447797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.635 [2024-11-08 02:26:03.462785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.635 [2024-11-08 02:26:03.463011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.635 [2024-11-08 02:26:03.463029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.635 [2024-11-08 02:26:03.478339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.635 [2024-11-08 02:26:03.478375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.635 [2024-11-08 02:26:03.478403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.635 17332.00 IOPS, 67.70 MiB/s [2024-11-08T02:26:03.519Z] [2024-11-08 02:26:03.494822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.635 [2024-11-08 02:26:03.494859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:1839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.635 [2024-11-08 02:26:03.494873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.635 [2024-11-08 02:26:03.509667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.635 [2024-11-08 02:26:03.509703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.635 [2024-11-08 02:26:03.509730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.894 [2024-11-08 02:26:03.525637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.894 [2024-11-08 02:26:03.525677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.894 [2024-11-08 02:26:03.525706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.894 [2024-11-08 02:26:03.541257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.894 [2024-11-08 02:26:03.541294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.894 [2024-11-08 02:26:03.541323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.894 [2024-11-08 02:26:03.556434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.894 [2024-11-08 02:26:03.556470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.894 [2024-11-08 02:26:03.556498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.894 [2024-11-08 02:26:03.571259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.894 [2024-11-08 02:26:03.571309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.894 [2024-11-08 02:26:03.571337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.894 [2024-11-08 02:26:03.586017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.894 [2024-11-08 02:26:03.586053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.894 [2024-11-08 02:26:03.586081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.894 [2024-11-08 02:26:03.601184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.894 [2024-11-08 02:26:03.601372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.601388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.616598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.895 [2024-11-08 02:26:03.616633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.616661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.630843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.895 [2024-11-08 02:26:03.630877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.630905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.645263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.895 [2024-11-08 02:26:03.645295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.645307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.659523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.895 [2024-11-08 02:26:03.659703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.659719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.674010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.895 [2024-11-08 02:26:03.674045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.674073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.688209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.895 [2024-11-08 02:26:03.688243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.688271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.702312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 
00:22:01.895 [2024-11-08 02:26:03.702346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.702374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.716779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.895 [2024-11-08 02:26:03.716813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.716840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.731163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.895 [2024-11-08 02:26:03.731200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.731229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.745620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.895 [2024-11-08 02:26:03.745655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.745683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.759727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.895 [2024-11-08 02:26:03.759761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.759788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:01.895 [2024-11-08 02:26:03.774115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:01.895 [2024-11-08 02:26:03.774198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.895 [2024-11-08 02:26:03.774214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.154 [2024-11-08 02:26:03.789237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.154 [2024-11-08 02:26:03.789425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.154 [2024-11-08 02:26:03.789442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.154 [2024-11-08 02:26:03.803611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xdff5a0) 00:22:02.154 [2024-11-08 02:26:03.803792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.154 [2024-11-08 02:26:03.803808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.154 [2024-11-08 02:26:03.818018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.154 [2024-11-08 02:26:03.818053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.154 [2024-11-08 02:26:03.818082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.154 [2024-11-08 02:26:03.832248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.154 [2024-11-08 02:26:03.832281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.154 [2024-11-08 02:26:03.832309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.154 [2024-11-08 02:26:03.847127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.154 [2024-11-08 02:26:03.847173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.154 [2024-11-08 02:26:03.847202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.154 [2024-11-08 02:26:03.861408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.154 [2024-11-08 02:26:03.861443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:03.861470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.155 [2024-11-08 02:26:03.876013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:03.876049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:03.876078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.155 [2024-11-08 02:26:03.890793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:03.891018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:03.891036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.155 [2024-11-08 02:26:03.905788] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:03.905967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:03.905983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.155 [2024-11-08 02:26:03.920214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:03.920248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:03.920276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.155 [2024-11-08 02:26:03.934373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:03.934411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:03.934440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.155 [2024-11-08 02:26:03.948722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:03.948756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:03.948784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.155 [2024-11-08 02:26:03.962890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:03.962924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:03.962991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.155 [2024-11-08 02:26:03.977010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:03.977044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:03.977072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.155 [2024-11-08 02:26:03.991087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:03.991149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:03.991182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:02.155 [2024-11-08 02:26:04.005292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:04.005328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:04.005355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.155 [2024-11-08 02:26:04.019411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:04.019592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:04.019608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.155 [2024-11-08 02:26:04.034319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.155 [2024-11-08 02:26:04.034522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.155 [2024-11-08 02:26:04.034540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.049498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.049698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.049714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.063978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.064167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.064184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.078305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.078340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.078368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.092776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.092810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.092837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.106899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.106939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.106967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.121394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.121428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.121456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.135876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.135912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.135940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.151012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.151232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.151251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.166002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.166037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.166065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.180143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.180175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.180203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.194253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.194286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.194314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.208353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.208387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.208414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.222446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.222479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.222506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.236635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.236667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.236695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.250838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.250874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.250901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.265030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.414 [2024-11-08 02:26:04.265065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.414 [2024-11-08 02:26:04.265093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.414 [2024-11-08 02:26:04.279232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.415 [2024-11-08 02:26:04.279298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.415 [2024-11-08 02:26:04.279326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.415 [2024-11-08 02:26:04.293525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.415 [2024-11-08 02:26:04.293560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.415 [2024-11-08 02:26:04.293588] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.673 [2024-11-08 02:26:04.308511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.673 [2024-11-08 02:26:04.308547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.673 [2024-11-08 02:26:04.308575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.673 [2024-11-08 02:26:04.323842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.673 [2024-11-08 02:26:04.324021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.673 [2024-11-08 02:26:04.324038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.673 [2024-11-08 02:26:04.341692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.673 [2024-11-08 02:26:04.341726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.673 [2024-11-08 02:26:04.341754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.673 [2024-11-08 02:26:04.358078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.673 [2024-11-08 02:26:04.358139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.673 [2024-11-08 02:26:04.358169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.673 [2024-11-08 02:26:04.380274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.673 [2024-11-08 02:26:04.380311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.674 [2024-11-08 02:26:04.380339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.674 [2024-11-08 02:26:04.394526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.674 [2024-11-08 02:26:04.394560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.674 [2024-11-08 02:26:04.394587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.674 [2024-11-08 02:26:04.408734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.674 [2024-11-08 02:26:04.408767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.674 [2024-11-08 
02:26:04.408795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.674 [2024-11-08 02:26:04.422976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.674 [2024-11-08 02:26:04.423014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.674 [2024-11-08 02:26:04.423042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.674 [2024-11-08 02:26:04.437298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.674 [2024-11-08 02:26:04.437331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.674 [2024-11-08 02:26:04.437358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.674 [2024-11-08 02:26:04.451661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.674 [2024-11-08 02:26:04.451842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.674 [2024-11-08 02:26:04.451857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.674 [2024-11-08 02:26:04.466168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.674 [2024-11-08 02:26:04.466348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.674 [2024-11-08 02:26:04.466366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.674 [2024-11-08 02:26:04.480980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.674 [2024-11-08 02:26:04.481190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.674 [2024-11-08 02:26:04.481207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.674 17268.00 IOPS, 67.45 MiB/s [2024-11-08T02:26:04.558Z] [2024-11-08 02:26:04.496968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdff5a0) 00:22:02.674 [2024-11-08 02:26:04.497003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.674 [2024-11-08 02:26:04.497031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:02.674 00:22:02.674 Latency(us) 00:22:02.674 [2024-11-08T02:26:04.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.674 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:02.674 nvme0n1 : 
2.01 17299.38 67.58 0.00 0.00 7393.84 6732.33 29669.93 00:22:02.674 [2024-11-08T02:26:04.558Z] =================================================================================================================== 00:22:02.674 [2024-11-08T02:26:04.558Z] Total : 17299.38 67.58 0.00 0.00 7393.84 6732.33 29669.93 00:22:02.674 { 00:22:02.674 "results": [ 00:22:02.674 { 00:22:02.674 "job": "nvme0n1", 00:22:02.674 "core_mask": "0x2", 00:22:02.674 "workload": "randread", 00:22:02.674 "status": "finished", 00:22:02.674 "queue_depth": 128, 00:22:02.674 "io_size": 4096, 00:22:02.674 "runtime": 2.011113, 00:22:02.674 "iops": 17299.37601716065, 00:22:02.674 "mibps": 67.57568756703378, 00:22:02.674 "io_failed": 0, 00:22:02.674 "io_timeout": 0, 00:22:02.674 "avg_latency_us": 7393.8379942566135, 00:22:02.674 "min_latency_us": 6732.334545454545, 00:22:02.674 "max_latency_us": 29669.934545454544 00:22:02.674 } 00:22:02.674 ], 00:22:02.674 "core_count": 1 00:22:02.674 } 00:22:02.674 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:02.674 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:02.674 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:02.674 | .driver_specific 00:22:02.674 | .nvme_error 00:22:02.674 | .status_code 00:22:02.674 | .command_transient_transport_error' 00:22:02.674 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:02.932 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:22:02.932 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95552 00:22:02.932 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95552 ']' 00:22:02.932 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95552 00:22:02.932 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:22:02.932 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.932 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95552 00:22:03.192 killing process with pid 95552 00:22:03.192 Received shutdown signal, test time was about 2.000000 seconds 00:22:03.192 00:22:03.192 Latency(us) 00:22:03.192 [2024-11-08T02:26:05.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.192 [2024-11-08T02:26:05.076Z] =================================================================================================================== 00:22:03.192 [2024-11-08T02:26:05.076Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95552' 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 
-- # kill 95552 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95552 00:22:03.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95612 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95612 /var/tmp/bperf.sock 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95612 ']' 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.192 02:26:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:03.192 [2024-11-08 02:26:05.033957] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:03.192 [2024-11-08 02:26:05.034294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95612 ] 00:22:03.192 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:03.192 Zero copy mechanism will not be used. 
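waitforlisten in the trace above simply polls the bdevperf application, which was started with -z and therefore sits idle until it answers RPCs on /var/tmp/bperf.sock. A rough bash equivalent of that wait, using only the rpc.py path already shown in this log, could look like the following; the retry loop is an illustrative stand-in for the real autotest_common.sh helper, not a copy of it.

  # Poll the UNIX-domain RPC socket until bdevperf responds; the 100-attempt
  # budget mirrors max_retries=100 in the trace above (assumed behavior).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do
      if "$rpc" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null; then
          break   # bdevperf is up and serving RPCs
      fi
      sleep 0.1
  done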
00:22:03.450 [2024-11-08 02:26:05.165588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.450 [2024-11-08 02:26:05.198744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.450 [2024-11-08 02:26:05.227339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:04.385 02:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:04.385 02:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:22:04.385 02:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:04.385 02:26:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:04.385 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:04.385 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.385 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:04.385 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.385 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:04.385 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:04.976 nvme0n1 00:22:04.976 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:04.976 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.976 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:04.977 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.977 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:04.977 02:26:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:04.977 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:04.977 Zero copy mechanism will not be used. 00:22:04.977 Running I/O for 2 seconds... 
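Everything from run_bperf_err randread 131072 16 down to perform_tests above is the setup for the 128 KiB data-digest error case: crc32c corruption is injected into the accel layer, the controller is attached with --ddgst so every received payload's data digest is verified, and each corrupted digest then surfaces as one of the COMMAND TRANSIENT TRANSPORT ERROR completions that follow. A condensed bash sketch of that flow, assembled only from the commands traced in this log, is shown below; the rpc/bperf_py wrappers and the errs variable are illustrative shorthand, not part of digest.sh itself.

  # Shorthand for the rpc.py / bdevperf.py invocations traced above (assumed wrappers).
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  bperf_py() { /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock "$@"; }

  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status error counters, retry forever
  rpc accel_error_inject_error -o crc32c -t disable                   # start with injection disabled
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # data digest enabled on the TCP qpair
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt the next 32 crc32c results
  bperf_py perform_tests                                              # run the queued randread workload

  # Each corrupted digest is counted as a transient transport error on nvme0n1,
  # which is what the (( errcount > 0 )) check in digest.sh asserts:
  errs=$(rpc bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))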
00:22:04.977 [2024-11-08 02:26:06.682228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.682274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.682303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.686314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.686352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.686365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.690375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.690412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.690424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.694408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.694446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.694476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.698243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.698277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.698306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.702074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.702308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.702326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.706232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.706268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.706297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.710159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.710195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.710224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.714015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.714236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.714254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.718141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.718187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.718215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.722084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.722293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.722311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.726278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.726315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.726328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.730219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.730256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.730268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.734148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.734194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.734208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.738044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.738280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.738298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.742248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.742286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.742299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.746261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.746297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.746310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.750176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.750212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.750225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.753972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.754183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.754200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.758022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.758237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.758254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.762135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.762168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:04.977 [2024-11-08 02:26:06.762196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.766038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.766242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.766258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.770263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.770300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.770313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.774225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.774262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.774274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.778128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.977 [2024-11-08 02:26:06.778163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.977 [2024-11-08 02:26:06.778175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.977 [2024-11-08 02:26:06.782021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.782225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.782242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.786111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.786161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.786174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.790006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.790190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.790206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.794069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.794273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.794291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.798337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.798376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.798389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.802307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.802344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.802357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.806207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.806243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.806255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.810024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.810235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.810252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.814108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.814155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.814184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.818008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.818220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.818237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.822186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.822223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.822235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.825994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.826206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.826223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.830141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.830193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.830222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.835385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.835453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.835490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:04.978 [2024-11-08 02:26:06.841175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:04.978 [2024-11-08 02:26:06.841245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:04.978 [2024-11-08 02:26:06.841271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.290 [2024-11-08 02:26:06.847401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.290 [2024-11-08 02:26:06.847472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.290 [2024-11-08 02:26:06.847495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.290 [2024-11-08 02:26:06.852101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 
00:22:05.290 [2024-11-08 02:26:06.852170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.290 [2024-11-08 02:26:06.852185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.290 [2024-11-08 02:26:06.856318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.290 [2024-11-08 02:26:06.856355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.290 [2024-11-08 02:26:06.856384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.290 [2024-11-08 02:26:06.860485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.290 [2024-11-08 02:26:06.860523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.290 [2024-11-08 02:26:06.860551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.290 [2024-11-08 02:26:06.865748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.290 [2024-11-08 02:26:06.866041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.290 [2024-11-08 02:26:06.866069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.290 [2024-11-08 02:26:06.872182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.290 [2024-11-08 02:26:06.872281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.290 [2024-11-08 02:26:06.872302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.290 [2024-11-08 02:26:06.876981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.290 [2024-11-08 02:26:06.877021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.290 [2024-11-08 02:26:06.877051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.290 [2024-11-08 02:26:06.881041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.290 [2024-11-08 02:26:06.881079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.290 [2024-11-08 02:26:06.881124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.290 [2024-11-08 02:26:06.885005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x746220) 00:22:05.290 [2024-11-08 02:26:06.885042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.290 [2024-11-08 02:26:06.885071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.290 [2024-11-08 02:26:06.888922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.888958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.888987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.892917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.892953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.892981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.896954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.896990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.897018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.900918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.900954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.900982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.904997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.905035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.905048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.908979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.909015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.909043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.912931] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.912967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.912995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.916881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.916916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.916943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.920969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.921007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.921019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.924908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.924943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.924971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.928836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.928871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.928900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.932869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.932907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.932936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.936858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.936894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.936923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:05.291 [2024-11-08 02:26:06.941001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.941038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.941066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.945189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.945226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.945254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.949094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.949157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.949185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.953015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.953051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.953079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.957022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.957059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.957087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.961065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.961145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.961175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.965009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.965045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.965073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.968906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.968941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.968969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.972840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.972875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.972904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.976798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.976834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.976862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.980705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.980741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.980769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.984648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.984684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.984712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.988515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.988550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.988579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.992444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.291 [2024-11-08 02:26:06.992479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.291 [2024-11-08 02:26:06.992507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.291 [2024-11-08 02:26:06.996416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:06.996452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:06.996480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.000280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.000314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.000342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.004287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.004321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.004349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.008202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.008236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.008264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.012127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.012189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.012218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.016046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.016081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.016110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.020105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.020165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:05.292 [2024-11-08 02:26:07.020196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.024408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.024446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.024475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.028628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.028664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.028693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.033212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.033250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.033278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.037670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.037707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.037736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.042049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.042086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.042147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.046482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.046541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.046565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.050727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.050764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.050792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.054832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.054868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.054896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.059035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.059073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.059102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.063117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.063163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.063192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.067406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.067442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.067471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.071429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.071464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.071492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.075386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.075422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.075450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.079361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.079397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.079425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.083349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.083384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.083413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.087417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.087454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.087467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.091381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.091416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.091444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.095475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.095510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.095539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.099447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.099483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.099511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.103477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.292 [2024-11-08 02:26:07.103510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.292 [2024-11-08 02:26:07.103522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.292 [2024-11-08 02:26:07.107632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 
00:22:05.292 [2024-11-08 02:26:07.107670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.293 [2024-11-08 02:26:07.107698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.293 [2024-11-08 02:26:07.111685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.293 [2024-11-08 02:26:07.111721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.293 [2024-11-08 02:26:07.111750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.293 [2024-11-08 02:26:07.115729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.293 [2024-11-08 02:26:07.115767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.293 [2024-11-08 02:26:07.115795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.293 [2024-11-08 02:26:07.119866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.293 [2024-11-08 02:26:07.119902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.293 [2024-11-08 02:26:07.119930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.293 [2024-11-08 02:26:07.124045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.293 [2024-11-08 02:26:07.124082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.293 [2024-11-08 02:26:07.124110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.293 [2024-11-08 02:26:07.128076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.293 [2024-11-08 02:26:07.128156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.293 [2024-11-08 02:26:07.128170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.293 [2024-11-08 02:26:07.132210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.293 [2024-11-08 02:26:07.132245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.293 [2024-11-08 02:26:07.132274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.293 [2024-11-08 02:26:07.136378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x746220) 00:22:05.293 [2024-11-08 02:26:07.136414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.293 [2024-11-08 02:26:07.136442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.293 [2024-11-08 02:26:07.140489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.293 [2024-11-08 02:26:07.140526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.293 [2024-11-08 02:26:07.140555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.293 [2024-11-08 02:26:07.145671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.293 [2024-11-08 02:26:07.145728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.293 [2024-11-08 02:26:07.145750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.293 [2024-11-08 02:26:07.151246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.293 [2024-11-08 02:26:07.151360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.293 [2024-11-08 02:26:07.151384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.562 [2024-11-08 02:26:07.156945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.157013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.157036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.562 [2024-11-08 02:26:07.161789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.161830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.161859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.562 [2024-11-08 02:26:07.166019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.166056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.166085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.562 [2024-11-08 02:26:07.169945] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.170147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.170164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.562 [2024-11-08 02:26:07.174132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.174169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.174196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.562 [2024-11-08 02:26:07.178343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.178410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.178433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.562 [2024-11-08 02:26:07.184118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.184195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.184213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.562 [2024-11-08 02:26:07.189999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.190228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.190248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.562 [2024-11-08 02:26:07.194273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.194310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.194338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.562 [2024-11-08 02:26:07.198271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.198307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.198336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:05.562 [2024-11-08 02:26:07.202238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.202273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.202301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.562 [2024-11-08 02:26:07.206489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.562 [2024-11-08 02:26:07.206528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.562 [2024-11-08 02:26:07.206541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.210470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.210507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.210520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.214380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.214416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.214445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.218339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.218374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.218402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.222265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.222300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.222328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.226332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.226369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.226397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.230291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.230327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.230354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.234241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.234277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.234305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.238249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.238285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.238313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.242424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.242461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.242474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.246502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.246537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.246564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.250556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.250592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.250605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.254644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.254684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.254696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.258647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.258682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.258710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.262579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.262614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.262642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.266441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.266476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.266503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.270304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.270339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.270366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.274122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.274156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.274184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.277977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.278190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.278207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.282067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.282210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:05.563 [2024-11-08 02:26:07.282226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.286172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.286209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.286221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.290077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.290284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.290301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.294244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.294280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.294293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.298230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.298267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.298279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.302123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.302160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.302172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.306012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.306214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.306231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.310139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.310176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.310189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.314052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.563 [2024-11-08 02:26:07.314234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.563 [2024-11-08 02:26:07.314250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.563 [2024-11-08 02:26:07.318200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.318238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.318251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.322183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.322219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.322231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.326089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.326133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.326161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.329970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.330180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.330197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.334081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.334296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.334313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.338358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.338396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.338410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.342229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.342264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.342292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.346129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.346175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.346203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.350033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.350245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.350261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.354159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.354194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.354221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.358112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.358143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.358155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.362049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.362256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.362272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.366154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 
00:22:05.564 [2024-11-08 02:26:07.366191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.366204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.370052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.370255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.370272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.374282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.374320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.374332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.378249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.378286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.378298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.382165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.382201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.382213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.386383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.386419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.386447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.390580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.390618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.390647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.395049] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.395089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.395134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.399500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.399536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.399564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.404172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.404218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.404233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.408700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.408735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.408763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.413026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.413062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.413090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.417354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.417392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.417422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.421620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.421655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.421683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:22:05.564 [2024-11-08 02:26:07.425848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.564 [2024-11-08 02:26:07.425883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.564 [2024-11-08 02:26:07.425911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.564 [2024-11-08 02:26:07.430088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.565 [2024-11-08 02:26:07.430196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.565 [2024-11-08 02:26:07.430212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.565 [2024-11-08 02:26:07.434449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.565 [2024-11-08 02:26:07.434517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.565 [2024-11-08 02:26:07.434545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.565 [2024-11-08 02:26:07.438909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.565 [2024-11-08 02:26:07.438995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.565 [2024-11-08 02:26:07.439011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.825 [2024-11-08 02:26:07.443870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.825 [2024-11-08 02:26:07.443908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.825 [2024-11-08 02:26:07.443937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.825 [2024-11-08 02:26:07.448090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.825 [2024-11-08 02:26:07.448169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.825 [2024-11-08 02:26:07.448184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.825 [2024-11-08 02:26:07.452334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.825 [2024-11-08 02:26:07.452371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.825 [2024-11-08 02:26:07.452399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.825 [2024-11-08 02:26:07.456392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.825 [2024-11-08 02:26:07.456429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.825 [2024-11-08 02:26:07.456458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.825 [2024-11-08 02:26:07.460374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.825 [2024-11-08 02:26:07.460408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.825 [2024-11-08 02:26:07.460436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.825 [2024-11-08 02:26:07.464375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.825 [2024-11-08 02:26:07.464413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.825 [2024-11-08 02:26:07.464441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.825 [2024-11-08 02:26:07.468319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.825 [2024-11-08 02:26:07.468354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.825 [2024-11-08 02:26:07.468382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.825 [2024-11-08 02:26:07.472200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.825 [2024-11-08 02:26:07.472235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.472262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.476077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.476291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.476309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.480352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.480389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.480417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.484450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.484485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.484514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.488331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.488366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.488393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.492316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.492350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.492378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.496265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.496316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.496344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.500175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.500208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.500235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.504136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.504352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.504370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.508381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.508417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:05.826 [2024-11-08 02:26:07.508445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.512336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.512371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.512399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.516258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.516292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.516319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.520210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.520245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.520273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.524129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.524345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.524361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.528296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.528331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.528359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.532184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.532218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.532246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.536054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.536265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.536281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.540253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.540288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.540316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.544173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.544208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.544236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.548120] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.548337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.548354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.552255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.552290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.552318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.556148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.556194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.556223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.560074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.560289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.560322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.564229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.564264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.564292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.568097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.568288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.568304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.572174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.572208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.572236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.576039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.576249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.826 [2024-11-08 02:26:07.576266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.826 [2024-11-08 02:26:07.580118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.826 [2024-11-08 02:26:07.580309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.580327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.584279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.584317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.584329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.588193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.588227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.588255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.592031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 
00:22:05.827 [2024-11-08 02:26:07.592242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.592259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.596209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.596246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.596274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.600122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.600337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.600353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.604394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.604431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.604459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.608396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.608433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.608462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.612543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.612579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.612607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.616694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.616731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.616760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.620720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.620757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.620786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.624760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.624796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.624825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.628736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.628771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.628798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.632722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.632757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.632785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.636697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.636732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.636760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.641604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.641666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.641687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.645665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.645701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.645729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.649628] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.649664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.649692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.653521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.653556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.653585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.657392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.657427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.657455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.661358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.661392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.661421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.665251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.665286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.665314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.669127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.669161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.669190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.673046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.673082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.673110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:22:05.827 7471.00 IOPS, 933.88 MiB/s [2024-11-08T02:26:07.711Z] [2024-11-08 02:26:07.678221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.678428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.678577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.681512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.681706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.681825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.684772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.684961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.827 [2024-11-08 02:26:07.685080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.827 [2024-11-08 02:26:07.688329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.827 [2024-11-08 02:26:07.688368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.828 [2024-11-08 02:26:07.688381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.828 [2024-11-08 02:26:07.690855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.828 [2024-11-08 02:26:07.690890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.828 [2024-11-08 02:26:07.690919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:05.828 [2024-11-08 02:26:07.694080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.828 [2024-11-08 02:26:07.694144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.828 [2024-11-08 02:26:07.694173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:05.828 [2024-11-08 02:26:07.697049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.828 [2024-11-08 02:26:07.697086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.828 [2024-11-08 02:26:07.697128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:05.828 [2024-11-08 02:26:07.700195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.828 [2024-11-08 02:26:07.700230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.828 [2024-11-08 02:26:07.700258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:05.828 [2024-11-08 02:26:07.703381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:05.828 [2024-11-08 02:26:07.703419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:05.828 [2024-11-08 02:26:07.703448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.706909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.706994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.707010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.709837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.709872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.709901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.713141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.713228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.713265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.716343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.716379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.716407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.719133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.719171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 
[2024-11-08 02:26:07.719200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.722065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.722129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.722159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.725238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.725274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.725302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.727913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.728094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.728146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.731407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.731443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.731485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.734095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.734172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.734186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.737174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.737208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.737235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.740067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.740272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.740289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.743164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.743200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.743213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.746030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.746066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.746094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.748845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.748881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.748909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.751874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.752055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.752071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.754867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.754903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.754955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.757822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.757860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.757872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.760928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.089 [2024-11-08 02:26:07.760966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.089 [2024-11-08 02:26:07.760979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.089 [2024-11-08 02:26:07.763707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.763887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.763904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.767128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.767165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.767179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.769952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.769988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.770016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.772483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.772518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.772546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.775507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.775544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.775573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.778573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.778608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.778636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.781457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.781493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.781521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.784599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.784633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.784661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.787509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.787544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.787572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.790221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.790255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.790284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.793007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.793042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.793070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.796258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.796295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.796307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.799260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.799310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.799353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.801942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 
[2024-11-08 02:26:07.801978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.802006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.804949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.804986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.805014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.807923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.808135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.808153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.811379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.811416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.811444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.814043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.814078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.814105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.817013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.817049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.817077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.819749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.819929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.819944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.822662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.822697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.822725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.825739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.825775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.825803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.828614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.828649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.828677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.832042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.832265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.832283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.834960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.835013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.835042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.838396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.838433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.838461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.841361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.090 [2024-11-08 02:26:07.841399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.090 [2024-11-08 02:26:07.841411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.090 [2024-11-08 02:26:07.844041] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.844250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.844266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.847876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.848056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.848072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.850631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.850666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.850694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.853998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.854033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.854061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.856707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.856741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.856769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.860741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.860776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.860803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.864717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.864751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.864779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:06.091 [2024-11-08 02:26:07.868644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.868678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.868706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.872677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.872711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.872738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.876650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.876684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.876711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.880588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.880621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.880649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.884474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.884508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.884535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.888430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.888464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.888491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.892366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.892400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.892427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.896294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.896327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.896354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.900192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.900225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.900253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.904130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.904345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.904362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.908265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.908299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.908327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.912147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.912189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.912217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.916005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.916215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.916232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.920212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.920246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.920274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.924121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.924334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.924351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.928318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.928352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.928380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.932209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.932242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.932270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.936129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.936339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.936357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.940342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.940376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.940404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.944217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.091 [2024-11-08 02:26:07.944250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.091 [2024-11-08 02:26:07.944278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.091 [2024-11-08 02:26:07.948125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.092 [2024-11-08 02:26:07.948340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.092 [2024-11-08 02:26:07.948357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.092 [2024-11-08 02:26:07.952212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.092 [2024-11-08 02:26:07.952246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.092 [2024-11-08 02:26:07.952273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.092 [2024-11-08 02:26:07.956032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.092 [2024-11-08 02:26:07.956221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.092 [2024-11-08 02:26:07.956238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.092 [2024-11-08 02:26:07.960179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.092 [2024-11-08 02:26:07.960212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.092 [2024-11-08 02:26:07.960240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.092 [2024-11-08 02:26:07.964142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.092 [2024-11-08 02:26:07.964178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.092 [2024-11-08 02:26:07.964190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.092 [2024-11-08 02:26:07.968664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.092 [2024-11-08 02:26:07.968701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.092 [2024-11-08 02:26:07.968730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:07.972855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:07.972892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:07.972920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:07.977114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:07.977179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 
[2024-11-08 02:26:07.977211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:07.981304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:07.981340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:07.981367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:07.985238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:07.985272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:07.985300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:07.989200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:07.989237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:07.989264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:07.993079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:07.993139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:07.993169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:07.996993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:07.997027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:07.997055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.000871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.000906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.000934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.004836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.004870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.004899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.008819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.008854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.008881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.012726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.012759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.012787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.016627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.016660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.016688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.020605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.020640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.020668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.024602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.024636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.024664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.028671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.028707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.028736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.032598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.032632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.032660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.036574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.036610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.036622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.040547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.040582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.040610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.044599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.044634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.044662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.048568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.048603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.048631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.052455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.052489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.052517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.056333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.056366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.056394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.060242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.060275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.060302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.064089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.064298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.353 [2024-11-08 02:26:08.064315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.353 [2024-11-08 02:26:08.068305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.353 [2024-11-08 02:26:08.068339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.068367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.072234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.072267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.072295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.076097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.076311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.076328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.080247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.080281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.080309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.084192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.084224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.084252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.088060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 
[2024-11-08 02:26:08.088268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.088286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.092216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.092250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.092277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.096101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.096313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.096329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.100324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.100359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.100387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.104174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.104208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.104235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.108135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.108179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.108207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.111983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.112169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.112185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.116110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.116319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.116336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.120241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.120275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.120303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.124064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.124272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.124289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.128311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.128345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.128373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.132202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.132236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.132264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.136287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.136320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.136348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.140255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.140288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.140315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.144168] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.144202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.144230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.148043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.148253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.148270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.152243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.152277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.152305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.156103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.156317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.156334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.160314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.160348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.160376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.164211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.164244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.164272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.168112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.168323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.168340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
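Each failed read above completes with status "(00/22)", i.e. status code type 0x0 (generic command status) and status code 0x22 (Transient Transport Error), with the phase, more, and do-not-retry bits clear. The sketch below shows one way to decode those fields from completion queue entry dword 3 per the NVMe base specification layout; the struct and function names are illustrative assumptions, not SPDK definitions.

/* Illustrative decode of the status bits printed above as
 * "(SCT/SC) ... p m dnr". CQE DW3: bit 16 = phase tag,
 * bits 31:17 = status field (SC, SCT, CRD, M, DNR). Not SPDK code. */
#include <stdint.h>
#include <stdio.h>

struct cqe_status {
    uint8_t sc;   /* status code, e.g. 0x22 = Transient Transport Error */
    uint8_t sct;  /* status code type, 0x0 = generic command status */
    uint8_t crd;  /* command retry delay */
    uint8_t m;    /* more */
    uint8_t dnr;  /* do not retry */
    uint8_t p;    /* phase tag */
};

static struct cqe_status decode_cqe_dw3(uint32_t dw3)
{
    struct cqe_status s;
    s.p   = (dw3 >> 16) & 0x1;
    s.sc  = (dw3 >> 17) & 0xff;
    s.sct = (dw3 >> 25) & 0x7;
    s.crd = (dw3 >> 28) & 0x3;
    s.m   = (dw3 >> 30) & 0x1;
    s.dnr = (dw3 >> 31) & 0x1;
    return s;
}

int main(void)
{
    /* hypothetical DW3 value reproducing the completions above:
     * sct=0x0, sc=0x22, p=0, m=0, dnr=0 */
    uint32_t dw3 = 0x22u << 17;
    struct cqe_status s = decode_cqe_dw3(dw3);
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}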
00:22:06.354 [2024-11-08 02:26:08.172276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.354 [2024-11-08 02:26:08.172310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.354 [2024-11-08 02:26:08.172338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.354 [2024-11-08 02:26:08.176113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.176325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.176342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.180257] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.180292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.180320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.184130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.184341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.184358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.188229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.188263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.188290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.192073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.192260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.192277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.196102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.196312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.196328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.200317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.200352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.200382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.204223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.204256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.204286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.208054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.208262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.208278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.212141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.212349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.212366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.216234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.216268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.216296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.220133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.220175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.220204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.223975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.224182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.224198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.228079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.228285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.228303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.355 [2024-11-08 02:26:08.232599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.355 [2024-11-08 02:26:08.232636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.355 [2024-11-08 02:26:08.232663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.236931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.236969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.236982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.241213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.241249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.241278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.245305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.245357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.245386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.249763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.249800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.249828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.253961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.253997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.254025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.258136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.258184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.258198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.262747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.262784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.262813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.267185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.267241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.267270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.271701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.271737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.271765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.276028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.276064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.276091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.280328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.280364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.280392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.284575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.284611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 
[2024-11-08 02:26:08.284639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.288715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.288750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.288778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.292930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.292964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.292992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.297072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.297132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.297146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.301037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.301073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.301100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.305079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.305155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.305169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.309068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.309146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.309160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.616 [2024-11-08 02:26:08.313347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.616 [2024-11-08 02:26:08.313383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11872 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.616 [2024-11-08 02:26:08.313412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.317316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.317351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.317379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.321406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.321442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.321471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.325391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.325425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.325453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.329344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.329379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.329407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.333458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.333494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.333523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.337496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.337532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.337561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.341564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.341601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.341629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.345732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.345769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.345798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.350163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.350221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.350251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.354320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.354355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.354368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.359027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.359085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.359101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.363919] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.364175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.364329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.368974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.369219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.369368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.373548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.373734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.373872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.377870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.378055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.378210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.382413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.382596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.382748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.386983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.387171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.387360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.391494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.391676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.391817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.395829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.396007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.396153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.400381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.400547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.400564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.404534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 
[2024-11-08 02:26:08.404570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.404598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.408928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.408965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.409009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.413366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.413403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.413432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.417864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.417901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.417929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.422529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.422565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.422593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.427233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.427273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.427288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.617 [2024-11-08 02:26:08.431826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.617 [2024-11-08 02:26:08.431862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.617 [2024-11-08 02:26:08.431890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.436262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.436300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.436331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.440630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.440665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.440693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.444971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.445007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.445036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.449546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.449580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.449609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.453782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.453816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.453845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.458074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.458153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.458184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.462344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.462379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.462391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.466458] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.466493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.466521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.470405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.470441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.470465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.474386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.474421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.474450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.478342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.478377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.478406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.482090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.482150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.482178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.485870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.486034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.486049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.618 [2024-11-08 02:26:08.489790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.489820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.489848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:06.618 [2024-11-08 02:26:08.493857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.618 [2024-11-08 02:26:08.494056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.618 [2024-11-08 02:26:08.494261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.878 [2024-11-08 02:26:08.498752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.878 [2024-11-08 02:26:08.498954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.878 [2024-11-08 02:26:08.499128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.878 [2024-11-08 02:26:08.503366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.878 [2024-11-08 02:26:08.503581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.878 [2024-11-08 02:26:08.503797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.507927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.508115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.508267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.512152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.512344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.512477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.516522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.516704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.516837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.520848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.521024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.521155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.525084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.525290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.525415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.529430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.529603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.529719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.533618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.533785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.533900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.537736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.537917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.538049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.542013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.542184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.542201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.546095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.546304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.546418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.550373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.550588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.550706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.554621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.554801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.554917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.558871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.559067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.559226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.563221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.563433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.563549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.567524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.567694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.567811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.571685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.571856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.571971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.575912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.576091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.576221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.580165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.580336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.580452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.584475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.584678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.584798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.589207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.589403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.589504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.593691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.593729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.593757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.597872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.597908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.597936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.602016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.602053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.602081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.606044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.879 [2024-11-08 02:26:08.606082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.879 [2024-11-08 02:26:08.606095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.879 [2024-11-08 02:26:08.610156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.610192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 
[2024-11-08 02:26:08.610220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.614029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.614065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.614094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.618014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.618050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.618077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.621958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.621993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.622020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.625810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.625844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.625872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.629815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.629849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.629876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.633767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.633802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.633830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.637742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.637775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.637803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.641740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.641775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.641803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.645715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.645750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.645778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.649684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.649718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.649745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.653532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.653567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.653595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.657412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.657446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.657473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.661357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.661393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.661421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.665607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.665642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.665670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:06.880 [2024-11-08 02:26:08.669547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.669582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.669610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:06.880 7734.50 IOPS, 966.81 MiB/s [2024-11-08T02:26:08.764Z] [2024-11-08 02:26:08.674775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x746220) 00:22:06.880 [2024-11-08 02:26:08.674804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.880 [2024-11-08 02:26:08.674832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:06.880 00:22:06.880 Latency(us) 00:22:06.880 [2024-11-08T02:26:08.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.880 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:06.880 nvme0n1 : 2.00 7734.02 966.75 0.00 0.00 2065.48 848.99 7417.48 00:22:06.880 [2024-11-08T02:26:08.764Z] =================================================================================================================== 00:22:06.880 [2024-11-08T02:26:08.764Z] Total : 7734.02 966.75 0.00 0.00 2065.48 848.99 7417.48 00:22:06.880 { 00:22:06.880 "results": [ 00:22:06.880 { 00:22:06.880 "job": "nvme0n1", 00:22:06.880 "core_mask": "0x2", 00:22:06.880 "workload": "randread", 00:22:06.880 "status": "finished", 00:22:06.880 "queue_depth": 16, 00:22:06.880 "io_size": 131072, 00:22:06.880 "runtime": 2.002194, 00:22:06.880 "iops": 7734.015784684201, 00:22:06.880 "mibps": 966.7519730855252, 00:22:06.880 "io_failed": 0, 00:22:06.880 "io_timeout": 0, 00:22:06.880 "avg_latency_us": 2065.4832963278245, 00:22:06.880 "min_latency_us": 848.9890909090909, 00:22:06.880 "max_latency_us": 7417.483636363636 00:22:06.880 } 00:22:06.880 ], 00:22:06.880 "core_count": 1 00:22:06.880 } 00:22:06.880 02:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:06.880 02:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:06.880 02:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:06.880 02:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:06.880 | .driver_specific 00:22:06.880 | .nvme_error 00:22:06.880 | .status_code 00:22:06.880 | .command_transient_transport_error' 00:22:07.139 02:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 499 > 0 )) 00:22:07.139 02:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95612 00:22:07.139 02:26:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95612 ']' 00:22:07.139 02:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95612 00:22:07.139 02:26:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:22:07.139 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:07.139 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95612 00:22:07.398 killing process with pid 95612 00:22:07.398 Received shutdown signal, test time was about 2.000000 seconds 00:22:07.398 00:22:07.398 Latency(us) 00:22:07.398 [2024-11-08T02:26:09.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.398 [2024-11-08T02:26:09.282Z] =================================================================================================================== 00:22:07.398 [2024-11-08T02:26:09.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95612' 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95612 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95612 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95669 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95669 /var/tmp/bperf.sock 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95669 ']' 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:07.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
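For reference, the pass/fail check traced a few lines back (get_transient_errcount, digest.sh lines 27/28/71) reduces to the small sketch below. It only restates what the trace already shows; the rpc.py path, the /var/tmp/bperf.sock socket and the nvme0n1 bdev name are the ones this job happens to use and should be read as placeholders. The counter it reads exists because bdev_nvme_set_options is called with --nvme-error-stat, as seen in the setup of the next pass just below.

    # Sketch of get_transient_errcount as traced above (digest.sh@27/@28).
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # digest.sh@71: the run passes only if at least one command completed with the
    # TRANSIENT TRANSPORT ERROR status that fills the dump above (499 in this run).
    (( $(get_transient_errcount nvme0n1) > 0 ))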
00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.398 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:07.398 [2024-11-08 02:26:09.213464] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:07.398 [2024-11-08 02:26:09.213702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95669 ] 00:22:07.657 [2024-11-08 02:26:09.345620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.657 [2024-11-08 02:26:09.378692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.657 [2024-11-08 02:26:09.406572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:07.657 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.657 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:22:07.657 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:07.657 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:07.915 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:07.915 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.915 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:07.915 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.915 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:07.915 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:08.173 nvme0n1 00:22:08.173 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:08.173 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.173 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:08.173 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.173 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:08.173 02:26:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:08.431 Running I/O for 2 seconds... 
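The randwrite error pass configured just above condenses to the following sketch. The binary paths, the 10.0.0.3:4420 target, the nqn.2016-06.io.spdk:cnode1 subsystem and the crc32c injection interval are taken verbatim from the trace; backgrounding bdevperf with & stands in for the waitforlisten handshake the harness actually performs, and mapping rpc_cmd to the plain rpc.py default socket (the nvmf target app rather than bperf.sock) is an assumption based on the missing -s flag in the trace.

    # bdevperf on core 1 (mask 0x2), 4 KiB random writes, queue depth 128,
    # 2-second run, started idle (-z) until perform_tests is issued.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &

    BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # stand-in for rpc_cmd in the trace (no -s flag)

    # Keep per-controller NVMe error statistics and retry failed commands indefinitely.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # digest.sh@63/@67: clear any previous crc32c injection, attach the controller with
    # data digest enabled (--ddgst), then corrupt crc32c results at the traced injection
    # interval (-i 256) so the NVMe/TCP data digests stop matching.
    $TGT_RPC accel_error_inject_error -o crc32c -t disable
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

    # digest.sh@69: drive the workload; the mismatches surface below as
    # "Data digest error" notices and TRANSIENT TRANSPORT ERROR completions.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests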
00:22:08.431 [2024-11-08 02:26:10.104665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fef90 00:22:08.432 [2024-11-08 02:26:10.107032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.107077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.118641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198feb58 00:22:08.432 [2024-11-08 02:26:10.120902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.120935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.132193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fe2e8 00:22:08.432 [2024-11-08 02:26:10.134409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.134441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.145998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fda78 00:22:08.432 [2024-11-08 02:26:10.148318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.148350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.159669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fd208 00:22:08.432 [2024-11-08 02:26:10.162160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.162215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.173496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fc998 00:22:08.432 [2024-11-08 02:26:10.175735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.175922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.187204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fc128 00:22:08.432 [2024-11-08 02:26:10.189327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.189361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:22:08.432 [2024-11-08 02:26:10.200554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fb8b8 00:22:08.432 [2024-11-08 02:26:10.202733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.202764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.214046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fb048 00:22:08.432 [2024-11-08 02:26:10.216276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.216308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.227750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fa7d8 00:22:08.432 [2024-11-08 02:26:10.229908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.229935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.241207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f9f68 00:22:08.432 [2024-11-08 02:26:10.243373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.243555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.254760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f96f8 00:22:08.432 [2024-11-08 02:26:10.256884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.256916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.268792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f8e88 00:22:08.432 [2024-11-08 02:26:10.270892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.270924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.282258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f8618 00:22:08.432 [2024-11-08 02:26:10.284311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.284343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.295647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f7da8 00:22:08.432 [2024-11-08 02:26:10.297946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.297977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:08.432 [2024-11-08 02:26:10.309465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f7538 00:22:08.432 [2024-11-08 02:26:10.311828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.432 [2024-11-08 02:26:10.312045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.324354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f6cc8 00:22:08.691 [2024-11-08 02:26:10.326351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.326385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.338082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f6458 00:22:08.691 [2024-11-08 02:26:10.340434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.340466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.351815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f5be8 00:22:08.691 [2024-11-08 02:26:10.353834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.353865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.365204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f5378 00:22:08.691 [2024-11-08 02:26:10.367121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.367183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.378488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f4b08 00:22:08.691 [2024-11-08 02:26:10.380460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.380492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.392092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f4298 00:22:08.691 [2024-11-08 02:26:10.394266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.394297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.405925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f3a28 00:22:08.691 [2024-11-08 02:26:10.407968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.407998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.419577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f31b8 00:22:08.691 [2024-11-08 02:26:10.421637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.421670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.433452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f2948 00:22:08.691 [2024-11-08 02:26:10.435397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.435429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.447063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f20d8 00:22:08.691 [2024-11-08 02:26:10.449007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.449034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.461625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f1868 00:22:08.691 [2024-11-08 02:26:10.464088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.464312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.478677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f0ff8 00:22:08.691 [2024-11-08 02:26:10.480770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.480802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.494002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f0788 00:22:08.691 [2024-11-08 02:26:10.496043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.496286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.508225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198eff18 00:22:08.691 [2024-11-08 02:26:10.509925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.509958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.522030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ef6a8 00:22:08.691 [2024-11-08 02:26:10.523949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.524136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.535853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198eee38 00:22:08.691 [2024-11-08 02:26:10.537694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.537722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.549519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ee5c8 00:22:08.691 [2024-11-08 02:26:10.551297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.551509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:08.691 [2024-11-08 02:26:10.563383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198edd58 00:22:08.691 [2024-11-08 02:26:10.565079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.691 [2024-11-08 02:26:10.565137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.577907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ed4e8 00:22:08.951 [2024-11-08 02:26:10.579902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.579938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.591808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ecc78 00:22:08.951 [2024-11-08 02:26:10.593566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.593595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.605383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ec408 00:22:08.951 [2024-11-08 02:26:10.607041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.607077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.618717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ebb98 00:22:08.951 [2024-11-08 02:26:10.620641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.620673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.632360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198eb328 00:22:08.951 [2024-11-08 02:26:10.633934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.633966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.645806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198eaab8 00:22:08.951 [2024-11-08 02:26:10.647471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.647671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.659806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ea248 00:22:08.951 [2024-11-08 02:26:10.661489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.661517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.673493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e99d8 00:22:08.951 [2024-11-08 02:26:10.675122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.675187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.686900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e9168 00:22:08.951 [2024-11-08 02:26:10.688572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.688618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.701592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e88f8 00:22:08.951 [2024-11-08 02:26:10.703272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.703321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.717782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e8088 00:22:08.951 [2024-11-08 02:26:10.719580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.719764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.732882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e7818 00:22:08.951 [2024-11-08 02:26:10.734524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.734559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.747302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e6fa8 00:22:08.951 [2024-11-08 02:26:10.749072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.749128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.761732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e6738 00:22:08.951 [2024-11-08 02:26:10.763460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.763487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.775964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e5ec8 00:22:08.951 [2024-11-08 02:26:10.777627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 
02:26:10.777660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.790384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e5658 00:22:08.951 [2024-11-08 02:26:10.791944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.792127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.805302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e4de8 00:22:08.951 [2024-11-08 02:26:10.806771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.806805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:08.951 [2024-11-08 02:26:10.819848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e4578 00:22:08.951 [2024-11-08 02:26:10.821452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:08.951 [2024-11-08 02:26:10.821480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:09.210 [2024-11-08 02:26:10.834920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e3d08 00:22:09.210 [2024-11-08 02:26:10.836726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.210 [2024-11-08 02:26:10.836814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:09.210 [2024-11-08 02:26:10.849596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e3498 00:22:09.210 [2024-11-08 02:26:10.851374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.210 [2024-11-08 02:26:10.851572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:09.210 [2024-11-08 02:26:10.864591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e2c28 00:22:09.210 [2024-11-08 02:26:10.865960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.210 [2024-11-08 02:26:10.865993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:09.210 [2024-11-08 02:26:10.879047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e23b8 00:22:09.210 [2024-11-08 02:26:10.880646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4776 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:09.210 [2024-11-08 02:26:10.880681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:09.210 [2024-11-08 02:26:10.893851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e1b48 00:22:09.211 [2024-11-08 02:26:10.895330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:10.895527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:10.908511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e12d8 00:22:09.211 [2024-11-08 02:26:10.909852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:10.909884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:10.922264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e0a68 00:22:09.211 [2024-11-08 02:26:10.923630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:10.923808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:10.935944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e01f8 00:22:09.211 [2024-11-08 02:26:10.937370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:10.937398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:10.949683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198df988 00:22:09.211 [2024-11-08 02:26:10.950987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:10.951023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:10.963220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198df118 00:22:09.211 [2024-11-08 02:26:10.964516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:10.964547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:10.976497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198de8a8 00:22:09.211 [2024-11-08 02:26:10.977688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:18936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:10.977735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:10.989756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198de038 00:22:09.211 [2024-11-08 02:26:10.991156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:10.991185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:11.008686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198de038 00:22:09.211 [2024-11-08 02:26:11.010890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:11.010921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:11.022364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198de8a8 00:22:09.211 [2024-11-08 02:26:11.024652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:11.024683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:11.035849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198df118 00:22:09.211 [2024-11-08 02:26:11.038341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:11.038371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:11.049753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198df988 00:22:09.211 [2024-11-08 02:26:11.051987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:11.052017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:11.063478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e01f8 00:22:09.211 [2024-11-08 02:26:11.065694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:11.065728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:09.211 [2024-11-08 02:26:11.076844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e0a68 00:22:09.211 [2024-11-08 02:26:11.079053] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.211 [2024-11-08 02:26:11.079088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:09.470 17965.00 IOPS, 70.18 MiB/s [2024-11-08T02:26:11.354Z] [2024-11-08 02:26:11.092434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e12d8 00:22:09.470 [2024-11-08 02:26:11.094780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.094812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.106546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e1b48 00:22:09.470 [2024-11-08 02:26:11.108821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.108856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.120321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e23b8 00:22:09.470 [2024-11-08 02:26:11.122412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.122444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.133775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e2c28 00:22:09.470 [2024-11-08 02:26:11.136014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.136224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.147759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e3498 00:22:09.470 [2024-11-08 02:26:11.149932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.150156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.161787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e3d08 00:22:09.470 [2024-11-08 02:26:11.164027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.164259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.175653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e4578 
00:22:09.470 [2024-11-08 02:26:11.177806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.178004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.189627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e4de8 00:22:09.470 [2024-11-08 02:26:11.191862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.192058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.203810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e5658 00:22:09.470 [2024-11-08 02:26:11.206024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.206235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.218064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e5ec8 00:22:09.470 [2024-11-08 02:26:11.220260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.220460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.232597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e6738 00:22:09.470 [2024-11-08 02:26:11.234694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.234889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.246777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e6fa8 00:22:09.470 [2024-11-08 02:26:11.248929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.249133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.260937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e7818 00:22:09.470 [2024-11-08 02:26:11.263081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.263323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.274843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with 
pdu=0x2000198e8088 00:22:09.470 [2024-11-08 02:26:11.277093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.277148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.288769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e88f8 00:22:09.470 [2024-11-08 02:26:11.290706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.290740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.302322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e9168 00:22:09.470 [2024-11-08 02:26:11.304318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.304352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.316010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198e99d8 00:22:09.470 [2024-11-08 02:26:11.317922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.470 [2024-11-08 02:26:11.317954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:09.470 [2024-11-08 02:26:11.329696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ea248 00:22:09.470 [2024-11-08 02:26:11.331672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.471 [2024-11-08 02:26:11.331850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:09.471 [2024-11-08 02:26:11.343588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198eaab8 00:22:09.471 [2024-11-08 02:26:11.345673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.471 [2024-11-08 02:26:11.345699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:09.729 [2024-11-08 02:26:11.358342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198eb328 00:22:09.729 [2024-11-08 02:26:11.360277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.729 [2024-11-08 02:26:11.360315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:09.729 [2024-11-08 02:26:11.371891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c54430) with pdu=0x2000198ebb98 00:22:09.729 [2024-11-08 02:26:11.373919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.729 [2024-11-08 02:26:11.373951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:09.729 [2024-11-08 02:26:11.385576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ec408 00:22:09.729 [2024-11-08 02:26:11.387576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.387608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.399175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ecc78 00:22:09.730 [2024-11-08 02:26:11.400972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.401004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.412961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ed4e8 00:22:09.730 [2024-11-08 02:26:11.414840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.414871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.426507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198edd58 00:22:09.730 [2024-11-08 02:26:11.428300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.428333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.440087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198ee5c8 00:22:09.730 [2024-11-08 02:26:11.442001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.442029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.453692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198eee38 00:22:09.730 [2024-11-08 02:26:11.455519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.455552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.467435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1c54430) with pdu=0x2000198ef6a8 00:22:09.730 [2024-11-08 02:26:11.469376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.469407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.481531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198eff18 00:22:09.730 [2024-11-08 02:26:11.483360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.483393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.497486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f0788 00:22:09.730 [2024-11-08 02:26:11.499443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.499590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.513454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f0ff8 00:22:09.730 [2024-11-08 02:26:11.515358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.515396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.528073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f1868 00:22:09.730 [2024-11-08 02:26:11.530094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.530179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.541918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f20d8 00:22:09.730 [2024-11-08 02:26:11.543745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.543910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.555609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f2948 00:22:09.730 [2024-11-08 02:26:11.557321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.557349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.569153] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f31b8 00:22:09.730 [2024-11-08 02:26:11.570686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.570717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.582400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f3a28 00:22:09.730 [2024-11-08 02:26:11.583996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.584202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.596018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f4298 00:22:09.730 [2024-11-08 02:26:11.597759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.730 [2024-11-08 02:26:11.597793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:09.730 [2024-11-08 02:26:11.610065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f4b08 00:22:09.989 [2024-11-08 02:26:11.611861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.989 [2024-11-08 02:26:11.612047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:09.989 [2024-11-08 02:26:11.624619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f5378 00:22:09.989 [2024-11-08 02:26:11.626094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.989 [2024-11-08 02:26:11.626155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:09.989 [2024-11-08 02:26:11.638476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f5be8 00:22:09.989 [2024-11-08 02:26:11.640027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.989 [2024-11-08 02:26:11.640060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:09.989 [2024-11-08 02:26:11.652406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f6458 00:22:09.989 [2024-11-08 02:26:11.653847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.989 [2024-11-08 02:26:11.653878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:09.989 
[2024-11-08 02:26:11.665793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f6cc8 00:22:09.989 [2024-11-08 02:26:11.667490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.989 [2024-11-08 02:26:11.667651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.989 [2024-11-08 02:26:11.679612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f7538 00:22:09.989 [2024-11-08 02:26:11.681167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.989 [2024-11-08 02:26:11.681364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:09.989 [2024-11-08 02:26:11.693395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f7da8 00:22:09.989 [2024-11-08 02:26:11.694932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.989 [2024-11-08 02:26:11.695199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:09.989 [2024-11-08 02:26:11.707709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f8618 00:22:09.989 [2024-11-08 02:26:11.709236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.989 [2024-11-08 02:26:11.709430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:09.989 [2024-11-08 02:26:11.721732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f8e88 00:22:09.989 [2024-11-08 02:26:11.723390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.989 [2024-11-08 02:26:11.723601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:09.989 [2024-11-08 02:26:11.735990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f96f8 00:22:09.989 [2024-11-08 02:26:11.737535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.989 [2024-11-08 02:26:11.737734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:09.989 [2024-11-08 02:26:11.749749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f9f68 00:22:09.989 [2024-11-08 02:26:11.751313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.990 [2024-11-08 02:26:11.751510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 
dnr:0 00:22:09.990 [2024-11-08 02:26:11.764311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fa7d8 00:22:09.990 [2024-11-08 02:26:11.765686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.990 [2024-11-08 02:26:11.765721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:09.990 [2024-11-08 02:26:11.778171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fb048 00:22:09.990 [2024-11-08 02:26:11.779569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.990 [2024-11-08 02:26:11.779601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:09.990 [2024-11-08 02:26:11.791644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fb8b8 00:22:09.990 [2024-11-08 02:26:11.792935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.990 [2024-11-08 02:26:11.792966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:09.990 [2024-11-08 02:26:11.805218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fc128 00:22:09.990 [2024-11-08 02:26:11.806505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.990 [2024-11-08 02:26:11.806537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:09.990 [2024-11-08 02:26:11.818412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fc998 00:22:09.990 [2024-11-08 02:26:11.819757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.990 [2024-11-08 02:26:11.819789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:09.990 [2024-11-08 02:26:11.832054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fd208 00:22:09.990 [2024-11-08 02:26:11.833525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.990 [2024-11-08 02:26:11.833552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:09.990 [2024-11-08 02:26:11.845748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fda78 00:22:09.990 [2024-11-08 02:26:11.846988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.990 [2024-11-08 02:26:11.847023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:09.990 [2024-11-08 02:26:11.859398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fe2e8 00:22:09.990 [2024-11-08 02:26:11.860917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:09.990 [2024-11-08 02:26:11.860948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:10.249 [2024-11-08 02:26:11.873938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198feb58 00:22:10.249 [2024-11-08 02:26:11.875515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.249 [2024-11-08 02:26:11.875547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:10.249 [2024-11-08 02:26:11.893326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fef90 00:22:10.249 [2024-11-08 02:26:11.895658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.249 [2024-11-08 02:26:11.895836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:10.249 [2024-11-08 02:26:11.907667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198feb58 00:22:10.249 [2024-11-08 02:26:11.910346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.249 [2024-11-08 02:26:11.910382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:10.249 [2024-11-08 02:26:11.923541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fe2e8 00:22:10.249 [2024-11-08 02:26:11.926050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.249 [2024-11-08 02:26:11.926084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:10.249 [2024-11-08 02:26:11.938889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fda78 00:22:10.249 [2024-11-08 02:26:11.941383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.249 [2024-11-08 02:26:11.941419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:10.249 [2024-11-08 02:26:11.953678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fd208 00:22:10.249 [2024-11-08 02:26:11.956167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.249 [2024-11-08 02:26:11.956212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:10.249 [2024-11-08 02:26:11.967996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fc998 00:22:10.249 [2024-11-08 02:26:11.970345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.249 [2024-11-08 02:26:11.970377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:10.249 [2024-11-08 02:26:11.982308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fc128 00:22:10.249 [2024-11-08 02:26:11.984564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.249 [2024-11-08 02:26:11.984596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:10.249 [2024-11-08 02:26:11.996616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fb8b8 00:22:10.249 [2024-11-08 02:26:11.998867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.249 [2024-11-08 02:26:11.998900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:10.249 [2024-11-08 02:26:12.010868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fb048 00:22:10.249 [2024-11-08 02:26:12.013161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.249 [2024-11-08 02:26:12.013193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:10.249 [2024-11-08 02:26:12.025048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198fa7d8 00:22:10.249 [2024-11-08 02:26:12.027421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.250 [2024-11-08 02:26:12.027607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:10.250 [2024-11-08 02:26:12.039739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f9f68 00:22:10.250 [2024-11-08 02:26:12.042000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.250 [2024-11-08 02:26:12.042029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:10.250 [2024-11-08 02:26:12.054035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f96f8 00:22:10.250 [2024-11-08 02:26:12.056317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.250 [2024-11-08 02:26:12.056350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:10.250 [2024-11-08 02:26:12.068333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f8e88 00:22:10.250 [2024-11-08 02:26:12.070381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.250 [2024-11-08 02:26:12.070414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:10.250 [2024-11-08 02:26:12.082223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54430) with pdu=0x2000198f8618 00:22:10.250 [2024-11-08 02:26:12.084340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:10.250 [2024-11-08 02:26:12.084372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:10.250 18027.50 IOPS, 70.42 MiB/s 00:22:10.250 Latency(us) 00:22:10.250 [2024-11-08T02:26:12.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.250 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:10.250 nvme0n1 : 2.01 18017.77 70.38 0.00 0.00 7098.62 3589.59 26929.34 00:22:10.250 [2024-11-08T02:26:12.134Z] =================================================================================================================== 00:22:10.250 [2024-11-08T02:26:12.134Z] Total : 18017.77 70.38 0.00 0.00 7098.62 3589.59 26929.34 00:22:10.250 { 00:22:10.250 "results": [ 00:22:10.250 { 00:22:10.250 "job": "nvme0n1", 00:22:10.250 "core_mask": "0x2", 00:22:10.250 "workload": "randwrite", 00:22:10.250 "status": "finished", 00:22:10.250 "queue_depth": 128, 00:22:10.250 "io_size": 4096, 00:22:10.250 "runtime": 2.008184, 00:22:10.250 "iops": 18017.771279922556, 00:22:10.250 "mibps": 70.38191906219748, 00:22:10.250 "io_failed": 0, 00:22:10.250 "io_timeout": 0, 00:22:10.250 "avg_latency_us": 7098.620451492791, 00:22:10.250 "min_latency_us": 3589.5854545454545, 00:22:10.250 "max_latency_us": 26929.33818181818 00:22:10.250 } 00:22:10.250 ], 00:22:10.250 "core_count": 1 00:22:10.250 } 00:22:10.250 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:10.250 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:10.250 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:10.250 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:10.250 | .driver_specific 00:22:10.250 | .nvme_error 00:22:10.250 | .status_code 00:22:10.250 | .command_transient_transport_error' 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95669 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95669 ']' 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95669 00:22:10.818 
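The xtrace above shows how digest.sh derives the pass/fail count: it asks the running bdevperf instance for per-bdev I/O statistics over the bperf RPC socket and filters the JSON for the NVMe "command transient transport error" counter. Below is a minimal sketch of that pipeline, reconstructed from the trace (the helper name, socket path and jq filter are taken from the xtrace; the function body itself is a reconstruction, not a verbatim copy of digest.sh).

```bash
# Sketch reconstructed from the digest.sh xtrace above (not a verbatim copy):
# query bdev I/O statistics over the bperf RPC socket and pull out the
# transient-transport-error counter exposed when --nvme-error-stat is set.
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

# Usage mirroring the (( 141 > 0 )) check in the trace: the run only passes if
# at least one injected digest error surfaced as a transient transport error.
(( $(get_transient_errcount nvme0n1) > 0 ))
```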
02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95669 00:22:10.818 killing process with pid 95669 00:22:10.818 Received shutdown signal, test time was about 2.000000 seconds 00:22:10.818 00:22:10.818 Latency(us) 00:22:10.818 [2024-11-08T02:26:12.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.818 [2024-11-08T02:26:12.702Z] =================================================================================================================== 00:22:10.818 [2024-11-08T02:26:12.702Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95669' 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95669 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95669 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95722 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95722 /var/tmp/bperf.sock 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95722 ']' 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:10.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
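At this point the trace launches a second bdevperf instance for the randwrite 131072/qd16 error run and waits for its RPC socket before configuring it. A minimal launch-and-wait sketch follows; the binary path and flags are copied from the xtrace, while the readiness poll is an assumption for illustration (digest.sh uses the autotest waitforlisten helper instead).

```bash
# Minimal launch-and-wait sketch; flags taken from the digest.sh xtrace above,
# the polling loop is an assumption standing in for waitforlisten.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# -z keeps bdevperf idle until perform_tests is sent over the RPC socket.
"$bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll until the app answers a trivial RPC; only then is it safe to issue the
# error-injection and attach-controller RPCs that follow in the log.
until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
```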
00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.818 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:10.818 [2024-11-08 02:26:12.621734] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:10.818 [2024-11-08 02:26:12.621970] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95722 ] 00:22:10.818 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:10.818 Zero copy mechanism will not be used. 00:22:11.077 [2024-11-08 02:26:12.755233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.077 [2024-11-08 02:26:12.789109] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.077 [2024-11-08 02:26:12.817307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:11.077 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.077 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:22:11.077 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:11.077 02:26:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:11.337 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:11.337 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.337 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:11.337 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.337 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:11.337 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:11.596 nvme0n1 00:22:11.596 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:11.596 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.596 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:11.596 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.596 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:11.596 02:26:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
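The RPC sequence echoed above is what arms the digest-error run: enable per-status-code error accounting with unlimited bdev retries, attach the TCP controller with data digest (--ddgst) enabled, then switch crc32c error injection from "disable" to "corrupt" and kick off the queued job. The sketch below replays that sequence against the bperf socket; every command, address, NQN and flag is copied from the trace, and only the shell variable names are added here (the meaning of "-i 32" is left as shown in the trace rather than asserted).

```bash
# RPC sequence reconstructed from the digest.sh xtrace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Count completions per NVMe status code and never give up on retries, so the
# injected digest failures accumulate as transient-transport-error statistics.
"$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with crc32c injection disabled, attach the target with data digest
# enabled, then flip the injection to "corrupt" (-i 32, as in the trace) so the
# computed data digests stop matching.
"$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32

# Start the queued bdevperf job; from here the log shows paired "Data digest
# error" / "COMMAND TRANSIENT TRANSPORT ERROR" messages for the affected WRITEs.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
```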
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:11.855 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:11.855 Zero copy mechanism will not be used. 00:22:11.855 Running I/O for 2 seconds... 00:22:11.855 [2024-11-08 02:26:13.541730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.855 [2024-11-08 02:26:13.542042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-11-08 02:26:13.542072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.855 [2024-11-08 02:26:13.547304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.855 [2024-11-08 02:26:13.547690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-11-08 02:26:13.547731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.855 [2024-11-08 02:26:13.552613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.855 [2024-11-08 02:26:13.552881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-11-08 02:26:13.552909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.855 [2024-11-08 02:26:13.557655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.855 [2024-11-08 02:26:13.557941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-11-08 02:26:13.557983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.855 [2024-11-08 02:26:13.562592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.855 [2024-11-08 02:26:13.562859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-11-08 02:26:13.562886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.855 [2024-11-08 02:26:13.567518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.855 [2024-11-08 02:26:13.567788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-11-08 02:26:13.567815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.855 [2024-11-08 02:26:13.572225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.855 [2024-11-08 02:26:13.572491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-11-08 02:26:13.572517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.855 [2024-11-08 02:26:13.576834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.855 [2024-11-08 02:26:13.577101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-11-08 02:26:13.577153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.855 [2024-11-08 02:26:13.581502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.855 [2024-11-08 02:26:13.581784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-11-08 02:26:13.581810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.855 [2024-11-08 02:26:13.586232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.855 [2024-11-08 02:26:13.586488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-11-08 02:26:13.586545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.855 [2024-11-08 02:26:13.590869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.855 [2024-11-08 02:26:13.591206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.855 [2024-11-08 02:26:13.591233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.595493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.595762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.595788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.600300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.600570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.600597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.604899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 
[2024-11-08 02:26:13.605211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.605238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.609598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.609869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.609896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.614163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.614432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.614458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.618725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.619222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.619246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.623554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.623825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.623852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.628300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.628553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.628579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.632956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.633252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.633282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.637548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.637815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.637841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.642219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.642486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.642511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.646984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.647425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.647470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.651790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.652062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.652088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.656438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.656706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.656740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.661099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.661422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.661462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.665863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.666116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.666152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.670565] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.671011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.671034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.675495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.675750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.675775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.680185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.680453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.680478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.684778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.685046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.685073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.689408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.689694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.689719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.694040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.694342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.694370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.698765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.699080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.699116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:22:11.856 [2024-11-08 02:26:13.703428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.703698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.703725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.708083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.708406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.708471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.712675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.713089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.713128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:11.856 [2024-11-08 02:26:13.717419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.856 [2024-11-08 02:26:13.717702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.856 [2024-11-08 02:26:13.717727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:11.857 [2024-11-08 02:26:13.722077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.857 [2024-11-08 02:26:13.722398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.857 [2024-11-08 02:26:13.722429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:11.857 [2024-11-08 02:26:13.726812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.857 [2024-11-08 02:26:13.727116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.857 [2024-11-08 02:26:13.727183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:11.857 [2024-11-08 02:26:13.731557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:11.857 [2024-11-08 02:26:13.731810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.857 [2024-11-08 02:26:13.731867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.736580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.736891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.736920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.741639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.741952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.741981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.746434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.746704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.746730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.751055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.751387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.751413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.755795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.756238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.756261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.760621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.760888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.760915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.765320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.765604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.765630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.770021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.770318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.770345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.774590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.774856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.774882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.779424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.779692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.779718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.784059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.784351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.784376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.788613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.788882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.788909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.793271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.793556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.793582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.797808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.798074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.798111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.802581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.803000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.803023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.807633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.807904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.807931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.812238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.812504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.812530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.816867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.817161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.817189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.116 [2024-11-08 02:26:13.821433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.116 [2024-11-08 02:26:13.821717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.116 [2024-11-08 02:26:13.821743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.826109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.826389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.826415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.830641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.830906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 
[2024-11-08 02:26:13.830955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.835447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.835713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.835739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.840190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.840470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.840495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.844835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.845300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.845322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.849727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.850140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.850417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.854865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.855401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.855557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.860089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.860550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.860740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.865245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.865685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.865835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.870355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.870790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.870989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.875514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.875936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.876089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.880676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.881099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.881348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.885799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.886226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.886410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.890862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.891376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.891544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.895890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.896335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.896359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.900622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.900892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.900919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.905197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.905478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.905519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.909901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.910187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.910227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.914457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.914722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.914749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.919094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.919426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.919451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.923823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.924085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.924119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.928534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.928821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.928847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.933043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.933339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.933380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.937776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.938048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.938075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.942672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.942985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.943013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.947511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.947796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.947824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.117 [2024-11-08 02:26:13.952297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.117 [2024-11-08 02:26:13.952566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.117 [2024-11-08 02:26:13.952592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.118 [2024-11-08 02:26:13.957053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.118 [2024-11-08 02:26:13.957359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-11-08 02:26:13.957385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.118 [2024-11-08 02:26:13.961948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.118 [2024-11-08 02:26:13.962234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-11-08 02:26:13.962260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.118 [2024-11-08 02:26:13.966626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.118 
[2024-11-08 02:26:13.966893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-11-08 02:26:13.966919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.118 [2024-11-08 02:26:13.971482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.118 [2024-11-08 02:26:13.971761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-11-08 02:26:13.971789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.118 [2024-11-08 02:26:13.976161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.118 [2024-11-08 02:26:13.976441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-11-08 02:26:13.976482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.118 [2024-11-08 02:26:13.980912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.118 [2024-11-08 02:26:13.981207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-11-08 02:26:13.981229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.118 [2024-11-08 02:26:13.985586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.118 [2024-11-08 02:26:13.985881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-11-08 02:26:13.985901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.118 [2024-11-08 02:26:13.990207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.118 [2024-11-08 02:26:13.990469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-11-08 02:26:13.990494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.118 [2024-11-08 02:26:13.994997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.118 [2024-11-08 02:26:13.995365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.118 [2024-11-08 02:26:13.995417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.000224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.000473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.000500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.004840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.004913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.004936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.009363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.009435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.009456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.013857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.013931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.013952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.018428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.018505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.018543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.022862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.022962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.022984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.027436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.027507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.027528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.031881] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.031957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.031978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.036385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.036456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.036477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.041130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.041215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.041237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.045774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.045849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.045878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.050479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.050569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.050590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.054934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.055030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.055052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.059435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.059507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.059528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.063930] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.064003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.064024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.068504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.068576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.068597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.072956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.073031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.073052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.378 [2024-11-08 02:26:14.077396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.378 [2024-11-08 02:26:14.077466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.378 [2024-11-08 02:26:14.077487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.081854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.081929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.081949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.086360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.086437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.086458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.090820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.090895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.090915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 
[2024-11-08 02:26:14.095520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.095594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.095615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.099965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.100040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.100060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.104496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.104570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.104590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.108994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.109065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.109086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.113553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.113627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.113647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.118061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.118163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.118197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.122800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.122873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.122894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.127565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.127640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.127660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.132164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.132236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.132256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.136627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.136701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.136722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.141310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.141384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.141405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.145804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.145879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.145899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.150405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.150482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.150518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.154788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.154861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.154881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.159395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.159468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.159489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.163883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.163957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.163978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.168436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.168510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.168531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.172934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.173007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.173028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.177380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.177454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.177475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.181812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.181887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.181908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.186277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.186351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.186372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.190771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.190845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.190866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.195364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.195435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.195455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.379 [2024-11-08 02:26:14.199853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.379 [2024-11-08 02:26:14.199926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.379 [2024-11-08 02:26:14.199947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.204343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.204417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 02:26:14.204438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.208796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.208871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 02:26:14.208891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.213307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.213383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 02:26:14.213403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.217706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.217780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 02:26:14.217800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.222239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.222307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 02:26:14.222329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.226739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.226814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 02:26:14.226835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.231481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.231543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 02:26:14.231563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.236032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.236105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 02:26:14.236126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.240508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.240583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 02:26:14.240604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.245150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.245210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 02:26:14.245230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.249715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.249786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 
02:26:14.249807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.380 [2024-11-08 02:26:14.254222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.380 [2024-11-08 02:26:14.254296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.380 [2024-11-08 02:26:14.254317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.259049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.259152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.259176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.263784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.263878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.263923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.268897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.268978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.269001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.273486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.273557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.273578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.277937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.278010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.278031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.282434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.282523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:12.640 [2024-11-08 02:26:14.282543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.287005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.287071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.287093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.291643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.291715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.291736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.296134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.296217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.296237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.300566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.300648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.300668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.305082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.305166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.305186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.309433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.309506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.309526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.313916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.313990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.314011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.318324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.318398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.318420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.322794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.322866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.322887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.327527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.327598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.327619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.332158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.332245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.332283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.337011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.337087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.337124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.341891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.341965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.341986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.346824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.346898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.640 [2024-11-08 02:26:14.346921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.640 [2024-11-08 02:26:14.351939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.640 [2024-11-08 02:26:14.352017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.352038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.357092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.357261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.357283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.362130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.362233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.362255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.367096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.367187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.367210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.371888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.371970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.371992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.376658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.376740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.376761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.381639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.381710] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.381731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.386320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.386395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.386416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.390915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.391021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.391043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.395596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.395671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.395692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.400461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.400535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.400557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.405176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.405262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.405284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.409851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.409926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.409947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.414462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.414534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.414555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.419091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.419175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.419198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.423889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.423959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.423981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.428602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.428676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.428696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.433358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.433433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.433455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.437919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.437994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.438015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.442825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.442900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.442921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.447591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 
02:26:14.447665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.447685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.452295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.452370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.452390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.456881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.456962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.456982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.461812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.461897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.461918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.466397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.466471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.466492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.471051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.471128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.471164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.475791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.641 [2024-11-08 02:26:14.475866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.475886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.641 [2024-11-08 02:26:14.480601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 
00:22:12.641 [2024-11-08 02:26:14.480674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.641 [2024-11-08 02:26:14.480696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.642 [2024-11-08 02:26:14.485277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.642 [2024-11-08 02:26:14.485354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-11-08 02:26:14.485375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.642 [2024-11-08 02:26:14.489866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.642 [2024-11-08 02:26:14.489950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-11-08 02:26:14.489971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.642 [2024-11-08 02:26:14.494469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.642 [2024-11-08 02:26:14.494545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-11-08 02:26:14.494566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.642 [2024-11-08 02:26:14.499194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.642 [2024-11-08 02:26:14.499272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-11-08 02:26:14.499307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.642 [2024-11-08 02:26:14.503858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.642 [2024-11-08 02:26:14.503920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-11-08 02:26:14.503941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.642 [2024-11-08 02:26:14.508503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.642 [2024-11-08 02:26:14.508578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-11-08 02:26:14.508598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.642 [2024-11-08 02:26:14.513250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.642 [2024-11-08 02:26:14.513327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-11-08 02:26:14.513348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.642 [2024-11-08 02:26:14.518087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.642 [2024-11-08 02:26:14.518179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.642 [2024-11-08 02:26:14.518202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.523242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.523351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.523389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.528119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.528207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.528243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.532923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.533001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.533022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.902 6589.00 IOPS, 823.62 MiB/s [2024-11-08T02:26:14.786Z] [2024-11-08 02:26:14.538862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.538965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.538989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.543573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.543654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.543676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 
02:26:14.548238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.548310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.548331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.553418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.553533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.553554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.558486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.558590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.558612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.563665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.563745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.563767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.569011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.569087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.569124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.574556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.574632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.574653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.579570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.579651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.579671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:12.902 [2024-11-08 02:26:14.584480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.584568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.584589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.589218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.589301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.902 [2024-11-08 02:26:14.589322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.902 [2024-11-08 02:26:14.593941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.902 [2024-11-08 02:26:14.594014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.594034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.598736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.598815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.598836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.603530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.603603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.603624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.608052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.608135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.608156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.612512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.612570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.612591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.617048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.617133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.617154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.621413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.621493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.621513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.625864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.625949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.625984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.630389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.630448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.630468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.634881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.635001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.635023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.639546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.639621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.639641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.644101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.644193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.644214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.648523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.648604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.648623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.653013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.653096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.653128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.657319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.657401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.657421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.661728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.661811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.661831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.666158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.666238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.666259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.670655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.670736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.670757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.675145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.675215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.675237] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.679782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.679856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.679877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.684265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.684350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.684371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.688675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.688751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.688771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.693155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.693232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.693252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.697553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.697635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.697655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.701978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.702060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.702081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.706413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.706493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.706513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.710952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.711059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.903 [2024-11-08 02:26:14.711082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.903 [2024-11-08 02:26:14.715632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.903 [2024-11-08 02:26:14.715712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.715732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.720021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.720102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.720123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.724431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.724503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.724523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.728904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.728978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.728999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.733370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.733454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.733475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.737789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.737870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 
02:26:14.737891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.742295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.742375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.742396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.746898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.747033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.747056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.751508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.751591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.751611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.756103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.756196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.756229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.760593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.760667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.760688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.765144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.765224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.765244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.769547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.769624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:12.904 [2024-11-08 02:26:14.769644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.774086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.774170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.774190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:12.904 [2024-11-08 02:26:14.778706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:12.904 [2024-11-08 02:26:14.778784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.904 [2024-11-08 02:26:14.778810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.164 [2024-11-08 02:26:14.783684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.164 [2024-11-08 02:26:14.783759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.164 [2024-11-08 02:26:14.783783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.164 [2024-11-08 02:26:14.788382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.164 [2024-11-08 02:26:14.788465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.164 [2024-11-08 02:26:14.788487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.164 [2024-11-08 02:26:14.793035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.164 [2024-11-08 02:26:14.793112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.164 [2024-11-08 02:26:14.793146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.164 [2024-11-08 02:26:14.797471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.164 [2024-11-08 02:26:14.797545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.164 [2024-11-08 02:26:14.797566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.164 [2024-11-08 02:26:14.801919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.164 [2024-11-08 02:26:14.801996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.164 [2024-11-08 02:26:14.802017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.164 [2024-11-08 02:26:14.806405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.164 [2024-11-08 02:26:14.806614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.164 [2024-11-08 02:26:14.806636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.164 [2024-11-08 02:26:14.810956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.164 [2024-11-08 02:26:14.811059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.164 [2024-11-08 02:26:14.811082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.164 [2024-11-08 02:26:14.815485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.164 [2024-11-08 02:26:14.815563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.164 [2024-11-08 02:26:14.815584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.164 [2024-11-08 02:26:14.819945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.164 [2024-11-08 02:26:14.820026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.164 [2024-11-08 02:26:14.820047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.164 [2024-11-08 02:26:14.824493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.164 [2024-11-08 02:26:14.824568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.164 [2024-11-08 02:26:14.824588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.829071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.829159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.829180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.833540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.833619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.833640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.837985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.838066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.838086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.842529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.842607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.842628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.847105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.847228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.847250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.851593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.851674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.851694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.856058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.856156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.856177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.860497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.860579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.860599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.864990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.865071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.865091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.869510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.869590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.869610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.874068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.874170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.874192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.878494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.878586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.878607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.883016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.883086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.883108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.887676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.887754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.887774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.892170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.892254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.892275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.896559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 
[2024-11-08 02:26:14.896636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.896657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.901047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.901138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.901158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.905502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.905583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.905603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.909935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.910015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.910036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.914407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.914489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.914524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.918991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.919055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.919077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.923639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.923707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.923727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.928296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.928363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.928383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.932685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.932765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.932785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.937211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.937276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.937297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.941625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.941707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.941728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.165 [2024-11-08 02:26:14.946066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.165 [2024-11-08 02:26:14.946162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.165 [2024-11-08 02:26:14.946184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:14.950611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.950683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.950704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:14.955239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.955372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.955393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:14.959773] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.959844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.959865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:14.964256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.964340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.964360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:14.968709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.968804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.968825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:14.973133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.973209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.973230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:14.977470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.977550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.977571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:14.981894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.981971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.981991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:14.986320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.986400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.986420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:13.166 [2024-11-08 02:26:14.990670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.990749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.990770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:14.995131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.995212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.995234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:14.999519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:14.999597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:14.999617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:15.003929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:15.004010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:15.004030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:15.008438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:15.008516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:15.008536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:15.012829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:15.012912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:15.012933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:15.017310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:15.017387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:15.017407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:15.021672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:15.021750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:15.021770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:15.026152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:15.026227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:15.026248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:15.030504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:15.030584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:15.030605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:15.034902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:15.035023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:15.035045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:15.039354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:15.039432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:15.039453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.166 [2024-11-08 02:26:15.044143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.166 [2024-11-08 02:26:15.044272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.166 [2024-11-08 02:26:15.044294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.426 [2024-11-08 02:26:15.048904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.426 [2024-11-08 02:26:15.048991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.426 [2024-11-08 02:26:15.049013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.426 [2024-11-08 02:26:15.053651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.426 [2024-11-08 02:26:15.053730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.426 [2024-11-08 02:26:15.053751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.426 [2024-11-08 02:26:15.058166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.426 [2024-11-08 02:26:15.058252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.426 [2024-11-08 02:26:15.058273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.426 [2024-11-08 02:26:15.062615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.426 [2024-11-08 02:26:15.062695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.426 [2024-11-08 02:26:15.062715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.426 [2024-11-08 02:26:15.067058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.426 [2024-11-08 02:26:15.067170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.426 [2024-11-08 02:26:15.067193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.426 [2024-11-08 02:26:15.071693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.426 [2024-11-08 02:26:15.071773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.426 [2024-11-08 02:26:15.071793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.426 [2024-11-08 02:26:15.076157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.426 [2024-11-08 02:26:15.076261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.426 [2024-11-08 02:26:15.076298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.426 [2024-11-08 02:26:15.080554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.426 [2024-11-08 02:26:15.080637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.080658] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.085067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.085140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.085161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.089571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.089655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.089676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.094092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.094177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.094198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.098680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.098759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.098780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.103399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.103483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.103504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.107857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.107938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.107958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.112436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.112533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 
[2024-11-08 02:26:15.112553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.116939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.117020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.117040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.121404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.121481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.121502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.125894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.125974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.125995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.130384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.130466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.130487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.134808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.134884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.134905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.139533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.139614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.139634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.143959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.144040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.144061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.148597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.148669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.148690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.153082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.153180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.153200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.157480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.157556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.157577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.161990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.162088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.162110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.166737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.166822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.166842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.171332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.171420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.171441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.175805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.175882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.175903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.180278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.180357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.180379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.184730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.184811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.184832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.189219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.189298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.189319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.193636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.427 [2024-11-08 02:26:15.193720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.427 [2024-11-08 02:26:15.193740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.427 [2024-11-08 02:26:15.198050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.198143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.198163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.202465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.202548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.202568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.206978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.207052] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.207073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.211438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.211518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.211539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.215895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.215980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.216001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.220440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.220527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.220547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.224890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.224971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.224992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.229461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.229541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.229561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.233906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.233980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.234000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.238405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 
[2024-11-08 02:26:15.238479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.238500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.242958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.243041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.243063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.247541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.247621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.247641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.251973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.252054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.252075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.256557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.256628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.256649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.261036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.261116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.261151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.265536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.265609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.265629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.270037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.270122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.270154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.274417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.274490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.274510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.278885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.279001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.279022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.283428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.283502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.283521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.287832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.287914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.287935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.292318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.292396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.292417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.428 [2024-11-08 02:26:15.296754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.428 [2024-11-08 02:26:15.296833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.428 [2024-11-08 02:26:15.296854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.429 [2024-11-08 02:26:15.301305] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.429 [2024-11-08 02:26:15.301383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.429 [2024-11-08 02:26:15.301404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.429 [2024-11-08 02:26:15.305940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.429 [2024-11-08 02:26:15.306011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.429 [2024-11-08 02:26:15.306034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.688 [2024-11-08 02:26:15.310754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.688 [2024-11-08 02:26:15.310840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.688 [2024-11-08 02:26:15.310861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.688 [2024-11-08 02:26:15.315708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.688 [2024-11-08 02:26:15.315788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.688 [2024-11-08 02:26:15.315810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.688 [2024-11-08 02:26:15.320241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.688 [2024-11-08 02:26:15.320325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.688 [2024-11-08 02:26:15.320346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.688 [2024-11-08 02:26:15.324718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.688 [2024-11-08 02:26:15.324798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.688 [2024-11-08 02:26:15.324819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.329274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.329354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.329375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:13.689 [2024-11-08 02:26:15.333718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.333789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.333810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.338217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.338289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.338311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.342611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.342693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.342714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.347211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.347287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.347308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.351677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.351763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.351783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.356270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.356345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.356366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.360698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.360779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.360799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.365194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.365273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.365294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.369800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.369884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.369904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.374266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.374347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.374368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.378706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.378779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.378800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.383198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.383299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.383334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.387862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.387935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.387955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.392363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.392438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.392460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.396832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.396906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.396927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.401256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.401338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.401358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.405666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.405746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.405766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.410137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.410219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.410240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.414541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.414627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.414647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.419056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.419149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.419170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.423620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.423682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.423703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.428189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.428264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.428285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.432626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.432697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.432717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.437042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.437116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.437150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.441521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.441599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.441636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.689 [2024-11-08 02:26:15.446016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.689 [2024-11-08 02:26:15.446098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.689 [2024-11-08 02:26:15.446129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.450480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.450562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.450582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.454961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.455031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 
[2024-11-08 02:26:15.455052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.459573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.459654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.459674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.464286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.464362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.464383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.468676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.468762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.468782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.473383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.473461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.473481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.477969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.478047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.478069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.482579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.482657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.482677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.487245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.487368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.487388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.491871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.491949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.491970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.496547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.496628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.496648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.501214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.501283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.501303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.505704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.505775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.505795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.510151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.510233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.510254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.514777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.514851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.514872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.519426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.519521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.519542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.523856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.523935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.523955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.528456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.528530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.528551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.690 [2024-11-08 02:26:15.532891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.532974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.532995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.690 6705.00 IOPS, 838.12 MiB/s [2024-11-08T02:26:15.574Z] [2024-11-08 02:26:15.538659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c54770) with pdu=0x2000198fef90 00:22:13.690 [2024-11-08 02:26:15.538729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:13.690 [2024-11-08 02:26:15.538750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.690 00:22:13.690 Latency(us) 00:22:13.690 [2024-11-08T02:26:15.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.690 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:13.690 nvme0n1 : 2.00 6701.21 837.65 0.00 0.00 2382.18 1541.59 5838.66 00:22:13.690 [2024-11-08T02:26:15.574Z] =================================================================================================================== 00:22:13.690 [2024-11-08T02:26:15.574Z] Total : 6701.21 837.65 0.00 0.00 2382.18 1541.59 5838.66 00:22:13.690 { 00:22:13.690 "results": [ 00:22:13.690 { 00:22:13.690 "job": "nvme0n1", 00:22:13.690 "core_mask": "0x2", 00:22:13.690 "workload": "randwrite", 00:22:13.690 "status": "finished", 00:22:13.690 "queue_depth": 16, 00:22:13.690 "io_size": 131072, 00:22:13.690 "runtime": 2.00352, 00:22:13.690 "iops": 6701.205877655327, 00:22:13.690 "mibps": 837.6507347069158, 00:22:13.690 "io_failed": 0, 00:22:13.690 "io_timeout": 0, 00:22:13.690 "avg_latency_us": 2382.1818030145037, 00:22:13.690 "min_latency_us": 1541.5854545454545, 00:22:13.690 "max_latency_us": 5838.6618181818185 00:22:13.690 } 00:22:13.690 ], 
00:22:13.690 "core_count": 1 00:22:13.690 } 00:22:13.690 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:13.690 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:13.690 | .driver_specific 00:22:13.690 | .nvme_error 00:22:13.690 | .status_code 00:22:13.690 | .command_transient_transport_error' 00:22:13.690 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:13.690 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 433 > 0 )) 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95722 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95722 ']' 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95722 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95722 00:22:14.258 killing process with pid 95722 00:22:14.258 Received shutdown signal, test time was about 2.000000 seconds 00:22:14.258 00:22:14.258 Latency(us) 00:22:14.258 [2024-11-08T02:26:16.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.258 [2024-11-08T02:26:16.142Z] =================================================================================================================== 00:22:14.258 [2024-11-08T02:26:16.142Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95722' 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95722 00:22:14.258 02:26:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95722 00:22:14.258 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95520 00:22:14.258 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95520 ']' 00:22:14.258 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95520 00:22:14.258 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:22:14.258 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:14.258 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95520 00:22:14.258 killing process 
with pid 95520 00:22:14.258 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:14.258 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:14.258 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95520' 00:22:14.258 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95520 00:22:14.258 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95520 00:22:14.517 00:22:14.517 real 0m16.478s 00:22:14.517 user 0m31.848s 00:22:14.517 sys 0m4.296s 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:14.517 ************************************ 00:22:14.517 END TEST nvmf_digest_error 00:22:14.517 ************************************ 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.517 rmmod nvme_tcp 00:22:14.517 rmmod nvme_fabrics 00:22:14.517 rmmod nvme_keyring 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 95520 ']' 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 95520 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 95520 ']' 00:22:14.517 Process with pid 95520 is not found 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 95520 00:22:14.517 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (95520) - No such process 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 95520 is not found' 00:22:14.517 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v 
SPDK_NVMF 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:14.518 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:22:14.777 ************************************ 00:22:14.777 END TEST nvmf_digest 00:22:14.777 ************************************ 00:22:14.777 00:22:14.777 real 0m32.204s 00:22:14.777 user 1m0.602s 00:22:14.777 sys 0m9.064s 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.777 ************************************ 00:22:14.777 START TEST nvmf_host_multipath 00:22:14.777 
************************************ 00:22:14.777 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:15.037 * Looking for test storage... 00:22:15.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:15.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.037 --rc genhtml_branch_coverage=1 00:22:15.037 --rc genhtml_function_coverage=1 00:22:15.037 --rc genhtml_legend=1 00:22:15.037 --rc geninfo_all_blocks=1 00:22:15.037 --rc geninfo_unexecuted_blocks=1 00:22:15.037 00:22:15.037 ' 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:15.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.037 --rc genhtml_branch_coverage=1 00:22:15.037 --rc genhtml_function_coverage=1 00:22:15.037 --rc genhtml_legend=1 00:22:15.037 --rc geninfo_all_blocks=1 00:22:15.037 --rc geninfo_unexecuted_blocks=1 00:22:15.037 00:22:15.037 ' 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:15.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.037 --rc genhtml_branch_coverage=1 00:22:15.037 --rc genhtml_function_coverage=1 00:22:15.037 --rc genhtml_legend=1 00:22:15.037 --rc geninfo_all_blocks=1 00:22:15.037 --rc geninfo_unexecuted_blocks=1 00:22:15.037 00:22:15.037 ' 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:15.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:15.037 --rc genhtml_branch_coverage=1 00:22:15.037 --rc genhtml_function_coverage=1 00:22:15.037 --rc genhtml_legend=1 00:22:15.037 --rc geninfo_all_blocks=1 00:22:15.037 --rc geninfo_unexecuted_blocks=1 00:22:15.037 00:22:15.037 ' 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:15.037 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:15.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:15.038 Cannot find device "nvmf_init_br" 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:15.038 Cannot find device "nvmf_init_br2" 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:15.038 Cannot find device "nvmf_tgt_br" 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:15.038 Cannot find device "nvmf_tgt_br2" 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:15.038 Cannot find device "nvmf_init_br" 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:22:15.038 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:15.297 Cannot find device "nvmf_init_br2" 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:15.297 Cannot find device "nvmf_tgt_br" 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:15.297 Cannot find device "nvmf_tgt_br2" 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:15.297 Cannot find device "nvmf_br" 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:15.297 Cannot find device "nvmf_init_if" 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:15.297 Cannot find device "nvmf_init_if2" 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:22:15.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:15.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:15.297 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:15.298 02:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:15.298 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:15.557 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:15.557 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:22:15.557 00:22:15.557 --- 10.0.0.3 ping statistics --- 00:22:15.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.557 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:15.557 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:15.557 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:22:15.557 00:22:15.557 --- 10.0.0.4 ping statistics --- 00:22:15.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.557 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:15.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:15.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:22:15.557 00:22:15.557 --- 10.0.0.1 ping statistics --- 00:22:15.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.557 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:15.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:15.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:22:15.557 00:22:15.557 --- 10.0.0.2 ping statistics --- 00:22:15.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.557 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=96029 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 96029 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 96029 ']' 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:15.557 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:15.557 [2024-11-08 02:26:17.314591] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:15.557 [2024-11-08 02:26:17.314688] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.816 [2024-11-08 02:26:17.456292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:15.816 [2024-11-08 02:26:17.500690] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.816 [2024-11-08 02:26:17.500996] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.816 [2024-11-08 02:26:17.501213] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.816 [2024-11-08 02:26:17.501511] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.816 [2024-11-08 02:26:17.501739] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.816 [2024-11-08 02:26:17.501998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.816 [2024-11-08 02:26:17.502010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.816 [2024-11-08 02:26:17.538932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:15.816 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:15.816 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:22:15.816 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:15.816 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:15.816 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:15.816 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.816 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96029 00:22:15.816 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:16.074 [2024-11-08 02:26:17.923326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.075 02:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:16.641 Malloc0 00:22:16.641 02:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:16.641 02:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:16.900 02:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:17.159 [2024-11-08 02:26:18.912892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:17.159 02:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:17.417 [2024-11-08 02:26:19.184995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:17.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:17.417 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96072 00:22:17.417 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:17.417 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:17.417 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96072 /var/tmp/bdevperf.sock 00:22:17.417 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 96072 ']' 00:22:17.417 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.417 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:17.417 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:17.417 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:17.417 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:17.676 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.676 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:22:17.676 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:17.940 02:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:18.200 Nvme0n1 00:22:18.200 02:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:18.766 Nvme0n1 00:22:18.766 02:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:22:18.766 02:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:19.701 02:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:22:19.701 02:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:19.960 02:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:20.219 02:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:22:20.219 02:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96114 00:22:20.219 02:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96029 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:20.219 02:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:26.802 02:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:26.802 02:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:26.802 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:26.802 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:26.802 Attaching 4 probes... 00:22:26.802 @path[10.0.0.3, 4421]: 16120 00:22:26.802 @path[10.0.0.3, 4421]: 16384 00:22:26.802 @path[10.0.0.3, 4421]: 16481 00:22:26.802 @path[10.0.0.3, 4421]: 16454 00:22:26.802 @path[10.0.0.3, 4421]: 16326 00:22:26.802 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:26.802 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:26.802 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:26.802 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:26.802 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:26.802 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:26.802 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96114 00:22:26.802 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:26.802 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:26.803 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:26.803 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:27.061 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:27.061 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96224 00:22:27.061 02:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96029 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:27.061 02:26:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:33.624 02:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:33.624 02:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:33.624 Attaching 4 probes... 00:22:33.624 @path[10.0.0.3, 4420]: 20315 00:22:33.624 @path[10.0.0.3, 4420]: 20653 00:22:33.624 @path[10.0.0.3, 4420]: 20521 00:22:33.624 @path[10.0.0.3, 4420]: 20499 00:22:33.624 @path[10.0.0.3, 4420]: 20625 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96224 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:33.624 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:33.884 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:33.884 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96029 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:33.884 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96342 00:22:33.884 02:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:40.446 Attaching 4 probes... 00:22:40.446 @path[10.0.0.3, 4421]: 14696 00:22:40.446 @path[10.0.0.3, 4421]: 20091 00:22:40.446 @path[10.0.0.3, 4421]: 20107 00:22:40.446 @path[10.0.0.3, 4421]: 19985 00:22:40.446 @path[10.0.0.3, 4421]: 20382 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96342 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:40.446 02:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:40.446 02:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:40.720 02:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:40.720 02:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96461 00:22:40.720 02:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:40.720 02:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96029 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:47.343 Attaching 4 probes... 
00:22:47.343 00:22:47.343 00:22:47.343 00:22:47.343 00:22:47.343 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96461 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:47.343 02:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:47.602 02:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:47.602 02:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96570 00:22:47.602 02:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96029 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:47.602 02:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:54.173 Attaching 4 probes... 
00:22:54.173 @path[10.0.0.3, 4421]: 19563 00:22:54.173 @path[10.0.0.3, 4421]: 19785 00:22:54.173 @path[10.0.0.3, 4421]: 19797 00:22:54.173 @path[10.0.0.3, 4421]: 19801 00:22:54.173 @path[10.0.0.3, 4421]: 19936 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96570 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:54.173 02:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:55.107 02:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:55.107 02:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96695 00:22:55.107 02:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:55.107 02:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96029 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:01.669 02:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:01.669 02:27:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:01.669 Attaching 4 probes... 
00:23:01.669 @path[10.0.0.3, 4420]: 19429 00:23:01.669 @path[10.0.0.3, 4420]: 19881 00:23:01.669 @path[10.0.0.3, 4420]: 19627 00:23:01.669 @path[10.0.0.3, 4420]: 19182 00:23:01.669 @path[10.0.0.3, 4420]: 18952 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96695 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:01.669 [2024-11-08 02:27:03.373431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:01.669 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:01.927 02:27:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:23:08.489 02:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:08.489 02:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96868 00:23:08.489 02:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96029 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:08.489 02:27:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:15.060 Attaching 4 probes... 
00:23:15.060 @path[10.0.0.3, 4421]: 19649 00:23:15.060 @path[10.0.0.3, 4421]: 20017 00:23:15.060 @path[10.0.0.3, 4421]: 19991 00:23:15.060 @path[10.0.0.3, 4421]: 20050 00:23:15.060 @path[10.0.0.3, 4421]: 20005 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96868 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96072 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 96072 ']' 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 96072 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96072 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:15.060 killing process with pid 96072 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96072' 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 96072 00:23:15.060 02:27:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 96072 00:23:15.060 { 00:23:15.060 "results": [ 00:23:15.060 { 00:23:15.060 "job": "Nvme0n1", 00:23:15.060 "core_mask": "0x4", 00:23:15.060 "workload": "verify", 00:23:15.060 "status": "terminated", 00:23:15.060 "verify_range": { 00:23:15.060 "start": 0, 00:23:15.060 "length": 16384 00:23:15.060 }, 00:23:15.060 "queue_depth": 128, 00:23:15.060 "io_size": 4096, 00:23:15.060 "runtime": 55.502623, 00:23:15.060 "iops": 8231.106483021533, 00:23:15.060 "mibps": 32.15275969930286, 00:23:15.060 "io_failed": 0, 00:23:15.060 "io_timeout": 0, 00:23:15.060 "avg_latency_us": 15521.140282536782, 00:23:15.060 "min_latency_us": 437.5272727272727, 00:23:15.060 "max_latency_us": 7015926.69090909 00:23:15.060 } 00:23:15.060 ], 00:23:15.060 "core_count": 1 00:23:15.060 } 00:23:15.060 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96072 00:23:15.060 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:15.060 [2024-11-08 02:26:19.249719] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 
/ DPDK 23.11.0 initialization... 00:23:15.060 [2024-11-08 02:26:19.249799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96072 ] 00:23:15.060 [2024-11-08 02:26:19.384713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.060 [2024-11-08 02:26:19.419663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.060 [2024-11-08 02:26:19.448958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:15.060 [2024-11-08 02:26:20.350183] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:23:15.060 Running I/O for 90 seconds... 00:23:15.060 7829.00 IOPS, 30.58 MiB/s [2024-11-08T02:27:16.944Z] 7926.00 IOPS, 30.96 MiB/s [2024-11-08T02:27:16.944Z] 8021.67 IOPS, 31.33 MiB/s [2024-11-08T02:27:16.944Z] 8064.25 IOPS, 31.50 MiB/s [2024-11-08T02:27:16.944Z] 8111.20 IOPS, 31.68 MiB/s [2024-11-08T02:27:16.944Z] 8124.67 IOPS, 31.74 MiB/s [2024-11-08T02:27:16.944Z] 8134.29 IOPS, 31.77 MiB/s [2024-11-08T02:27:16.944Z] 8125.50 IOPS, 31.74 MiB/s [2024-11-08T02:27:16.944Z] [2024-11-08 02:26:28.742595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.060 [2024-11-08 02:26:28.742653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:15.060 [2024-11-08 02:26:28.742718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.060 [2024-11-08 02:26:28.742737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:15.060 [2024-11-08 02:26:28.742758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.060 [2024-11-08 02:26:28.742772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:15.060 [2024-11-08 02:26:28.742791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.060 [2024-11-08 02:26:28.742804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.742822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.742834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.742853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.742866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 
02:26:28.742884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.742897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.742915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.742955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 
sqhd:000b p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.743971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.743990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.744002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.744287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744457] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.744740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.744754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.746401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.746443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.746477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.746510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.746543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.746575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.746606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.746638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.746670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.746703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.746735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.746767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.746799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.746839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.746871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.746904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.746948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.746982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.061 [2024-11-08 02:26:28.747050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747105] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 
sqhd:0046 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.061 [2024-11-08 02:26:28.747565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:15.061 [2024-11-08 02:26:28.747584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747798] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.747976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.747989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 
02:26:28.748139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12672 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748834] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.748963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.748977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.749003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.749017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.749036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.749050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.749069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.749082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.749101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.749125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.749148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.749162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 
02:26:28.749181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.749195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.749213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.749227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.749246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.749260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:28.749279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:28.749293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:15.062 8250.67 IOPS, 32.23 MiB/s [2024-11-08T02:27:16.946Z] 8448.80 IOPS, 33.00 MiB/s [2024-11-08T02:27:16.946Z] 8619.91 IOPS, 33.67 MiB/s [2024-11-08T02:27:16.946Z] 8762.42 IOPS, 34.23 MiB/s [2024-11-08T02:27:16.946Z] 8879.00 IOPS, 34.68 MiB/s [2024-11-08T02:27:16.946Z] 8979.43 IOPS, 35.08 MiB/s [2024-11-08T02:27:16.946Z] [2024-11-08 02:26:35.305818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.062 [2024-11-08 02:26:35.305871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.305939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.062 [2024-11-08 02:26:35.305959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.305980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.062 [2024-11-08 02:26:35.306015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.062 [2024-11-08 02:26:35.306048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.062 [2024-11-08 02:26:35.306080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:15.062 
[2024-11-08 02:26:35.306098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.062 [2024-11-08 02:26:35.306110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.062 [2024-11-08 02:26:35.306161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.062 [2024-11-08 02:26:35.306192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.062 [2024-11-08 02:26:35.306644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:15.062 [2024-11-08 02:26:35.306662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.306675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.306694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.306707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.306979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:15.063 [2024-11-08 02:26:35.307394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.307815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.307958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.307979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:23:15.063 [2024-11-08 02:26:35.308407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.308579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.308611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.308642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.308673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.308705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.308736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.308767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.063 [2024-11-08 02:26:35.308806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.308985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.308998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.309016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.309029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.309048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.309061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.309079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.309092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.309138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.309153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.309173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.309186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.309213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.309228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.309247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.063 [2024-11-08 02:26:35.309260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.063 [2024-11-08 02:26:35.309279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.309293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.309312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.309325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.309344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.309357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.309377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:15.064 [2024-11-08 02:26:35.309391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.309410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:35.309423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.309442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:35.309457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.309476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:35.309489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.309523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:35.309536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.309554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:35.309568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.309586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:35.309599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:35.310322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 
nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.310976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.310993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.311023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.311045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.311071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.311085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.311111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.311125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.311166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.311182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:35.311208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:35.311223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:15.064 8931.73 IOPS, 34.89 MiB/s [2024-11-08T02:27:16.948Z] 8481.75 IOPS, 33.13 MiB/s [2024-11-08T02:27:16.948Z] 8578.12 IOPS, 33.51 MiB/s [2024-11-08T02:27:16.948Z] 8662.17 IOPS, 33.84 MiB/s [2024-11-08T02:27:16.948Z] 8733.68 IOPS, 34.12 MiB/s [2024-11-08T02:27:16.948Z] 8798.90 IOPS, 34.37 MiB/s [2024-11-08T02:27:16.948Z] 8858.86 IOPS, 34.60 MiB/s [2024-11-08T02:27:16.948Z] [2024-11-08 02:26:42.390179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 
dnr:0 00:23:15.064 [2024-11-08 02:26:42.390323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.390973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.390987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:42.391054] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:42.391087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:42.391133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:42.391169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:42.391202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:42.391235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:42.391268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.064 [2024-11-08 02:26:42.391324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117456 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391737] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.064 [2024-11-08 02:26:42.391815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:15.064 [2024-11-08 02:26:42.391834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.391847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.391866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.391879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.391897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.391911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.391929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.391942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.391961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.391975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.391993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.392006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.392038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 
02:26:42.392057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.392070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.392103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:117072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.392154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.392187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:117696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.392825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.392858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:117096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.392891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.392923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.392956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.392975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.392994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.393029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.393062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117144 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.393095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:117752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:117800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:117872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.065 [2024-11-08 02:26:42.393764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393783] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.393796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.393829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.393870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.393902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.393936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.393968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.393987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.394002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.394021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.394034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.394053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.394066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.394086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.394109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 
cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.394132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.394145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.394165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.065 [2024-11-08 02:26:42.394178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:15.065 [2024-11-08 02:26:42.394198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:42.394212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.394234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:42.394248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.394275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:42.394290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.394900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:42.394954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:117960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:117968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:42.395618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:42.395632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:15.066 8856.55 IOPS, 34.60 MiB/s [2024-11-08T02:27:16.950Z] 8471.48 IOPS, 33.09 MiB/s [2024-11-08T02:27:16.950Z] 8118.50 IOPS, 31.71 MiB/s [2024-11-08T02:27:16.950Z] 
7793.76 IOPS, 30.44 MiB/s [2024-11-08T02:27:16.950Z] 7494.00 IOPS, 29.27 MiB/s [2024-11-08T02:27:16.950Z] 7216.44 IOPS, 28.19 MiB/s [2024-11-08T02:27:16.950Z] 6958.71 IOPS, 27.18 MiB/s [2024-11-08T02:27:16.950Z] 6744.79 IOPS, 26.35 MiB/s [2024-11-08T02:27:16.950Z] 6843.63 IOPS, 26.73 MiB/s [2024-11-08T02:27:16.950Z] 6942.16 IOPS, 27.12 MiB/s [2024-11-08T02:27:16.950Z] 7034.22 IOPS, 27.48 MiB/s [2024-11-08T02:27:16.950Z] 7121.36 IOPS, 27.82 MiB/s [2024-11-08T02:27:16.950Z] 7206.09 IOPS, 28.15 MiB/s [2024-11-08T02:27:16.950Z] 7285.89 IOPS, 28.46 MiB/s [2024-11-08T02:27:16.950Z] [2024-11-08 02:26:55.772237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.772293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.772385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.772419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.772451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.772483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.772515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.772547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.772578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.772657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.772690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.772723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.772756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.772804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.772837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.772870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.772904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.772953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.772975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.772988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773007] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d 
p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.773608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.773637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.773663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.773689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.773715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.773757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.773784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.066 [2024-11-08 02:26:55.773810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.773975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.773989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.774001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.774016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.774028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 
02:26:55.774042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.774054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.774069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.774082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.774096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.774108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.774122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.774134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.774160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.774176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.774190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.774203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.066 [2024-11-08 02:26:55.774217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.066 [2024-11-08 02:26:55.774229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.774281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.774313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.774341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.774367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.774393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.774419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.774446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.774472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:58 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95336 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.774902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.774957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.774972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.774985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 
[2024-11-08 02:26:55.775213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.067 [2024-11-08 02:26:55.775452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:15.067 [2024-11-08 02:26:55.775831] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.067 [2024-11-08 02:26:55.775871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdec8f0 is same with the state(6) to be set 00:23:15.067 [2024-11-08 02:26:55.775910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.067 [2024-11-08 02:26:55.775920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.067 [2024-11-08 02:26:55.775934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95464 len:8 PRP1 0x0 PRP2 0x0 00:23:15.067 [2024-11-08 02:26:55.775946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.775959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.067 [2024-11-08 02:26:55.775968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.067 [2024-11-08 02:26:55.775978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95472 len:8 PRP1 0x0 PRP2 0x0 00:23:15.067 [2024-11-08 02:26:55.775992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.776004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.067 [2024-11-08 02:26:55.776013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.067 [2024-11-08 02:26:55.776022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95800 len:8 PRP1 0x0 PRP2 0x0 00:23:15.067 [2024-11-08 02:26:55.776034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.776046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.067 [2024-11-08 02:26:55.776055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.067 [2024-11-08 02:26:55.776064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95808 len:8 PRP1 0x0 PRP2 0x0 00:23:15.067 [2024-11-08 02:26:55.776076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.776088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.067 [2024-11-08 02:26:55.776097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.067 [2024-11-08 02:26:55.776119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0 00:23:15.067 [2024-11-08 
02:26:55.776131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.776144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.067 [2024-11-08 02:26:55.776153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.067 [2024-11-08 02:26:55.776162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95824 len:8 PRP1 0x0 PRP2 0x0 00:23:15.067 [2024-11-08 02:26:55.776174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.776186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.067 [2024-11-08 02:26:55.776202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.067 [2024-11-08 02:26:55.776212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95832 len:8 PRP1 0x0 PRP2 0x0 00:23:15.067 [2024-11-08 02:26:55.776224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.776236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.067 [2024-11-08 02:26:55.776245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.067 [2024-11-08 02:26:55.776254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95840 len:8 PRP1 0x0 PRP2 0x0 00:23:15.067 [2024-11-08 02:26:55.776266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.067 [2024-11-08 02:26:55.776278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.067 [2024-11-08 02:26:55.776287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.067 [2024-11-08 02:26:55.776297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95848 len:8 PRP1 0x0 PRP2 0x0 00:23:15.067 [2024-11-08 02:26:55.776309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.068 [2024-11-08 02:26:55.776330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.068 [2024-11-08 02:26:55.776339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95856 len:8 PRP1 0x0 PRP2 0x0 00:23:15.068 [2024-11-08 02:26:55.776352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.068 [2024-11-08 02:26:55.776373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.068 [2024-11-08 02:26:55.776382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95864 len:8 PRP1 0x0 PRP2 0x0 00:23:15.068 [2024-11-08 02:26:55.776394] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.068 [2024-11-08 02:26:55.776415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.068 [2024-11-08 02:26:55.776425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95872 len:8 PRP1 0x0 PRP2 0x0 00:23:15.068 [2024-11-08 02:26:55.776437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.068 [2024-11-08 02:26:55.776457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.068 [2024-11-08 02:26:55.776467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95880 len:8 PRP1 0x0 PRP2 0x0 00:23:15.068 [2024-11-08 02:26:55.776479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.068 [2024-11-08 02:26:55.776499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.068 [2024-11-08 02:26:55.776509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95888 len:8 PRP1 0x0 PRP2 0x0 00:23:15.068 [2024-11-08 02:26:55.776520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.068 [2024-11-08 02:26:55.776548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.068 [2024-11-08 02:26:55.776557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95896 len:8 PRP1 0x0 PRP2 0x0 00:23:15.068 [2024-11-08 02:26:55.776569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.068 [2024-11-08 02:26:55.776590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.068 [2024-11-08 02:26:55.776600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95904 len:8 PRP1 0x0 PRP2 0x0 00:23:15.068 [2024-11-08 02:26:55.776612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.068 [2024-11-08 02:26:55.776633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.068 [2024-11-08 02:26:55.776643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95912 len:8 PRP1 0x0 PRP2 0x0 00:23:15.068 [2024-11-08 02:26:55.776655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:15.068 [2024-11-08 02:26:55.776676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:15.068 [2024-11-08 02:26:55.776685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95920 len:8 PRP1 0x0 PRP2 0x0 00:23:15.068 [2024-11-08 02:26:55.776697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776759] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdec8f0 was disconnected and freed. reset controller. 00:23:15.068 [2024-11-08 02:26:55.776896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.068 [2024-11-08 02:26:55.776925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.068 [2024-11-08 02:26:55.776952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.068 [2024-11-08 02:26:55.776977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.776990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:15.068 [2024-11-08 02:26:55.777002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.777015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.068 [2024-11-08 02:26:55.777028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.068 [2024-11-08 02:26:55.777045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9370 is same with the state(6) to be set 00:23:15.068 [2024-11-08 02:26:55.778048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:15.068 [2024-11-08 02:26:55.778116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb9370 (9): Bad file descriptor 00:23:15.068 [2024-11-08 02:26:55.778551] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:15.068 [2024-11-08 02:26:55.778588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb9370 with addr=10.0.0.3, port=4421 00:23:15.068 [2024-11-08 02:26:55.778606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb9370 is same with the state(6) to be set 00:23:15.068 [2024-11-08 02:26:55.778641] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb9370 (9): Bad file descriptor 00:23:15.068 [2024-11-08 02:26:55.778671] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:15.068 [2024-11-08 02:26:55.778687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:15.068 [2024-11-08 02:26:55.778701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:15.068 [2024-11-08 02:26:55.778731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:15.068 [2024-11-08 02:26:55.778748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:15.068 7354.06 IOPS, 28.73 MiB/s [2024-11-08T02:27:16.952Z] 7414.97 IOPS, 28.96 MiB/s [2024-11-08T02:27:16.952Z] 7479.84 IOPS, 29.22 MiB/s [2024-11-08T02:27:16.952Z] 7543.23 IOPS, 29.47 MiB/s [2024-11-08T02:27:16.952Z] 7597.25 IOPS, 29.68 MiB/s [2024-11-08T02:27:16.952Z] 7647.46 IOPS, 29.87 MiB/s [2024-11-08T02:27:16.952Z] 7690.14 IOPS, 30.04 MiB/s [2024-11-08T02:27:16.952Z] 7726.74 IOPS, 30.18 MiB/s [2024-11-08T02:27:16.952Z] 7774.05 IOPS, 30.37 MiB/s [2024-11-08T02:27:16.952Z] 7821.38 IOPS, 30.55 MiB/s [2024-11-08T02:27:16.952Z] [2024-11-08 02:27:05.832987] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:15.068 7869.57 IOPS, 30.74 MiB/s [2024-11-08T02:27:16.952Z] 7918.47 IOPS, 30.93 MiB/s [2024-11-08T02:27:16.952Z] 7963.00 IOPS, 31.11 MiB/s [2024-11-08T02:27:16.952Z] 8005.71 IOPS, 31.27 MiB/s [2024-11-08T02:27:16.952Z] 8037.28 IOPS, 31.40 MiB/s [2024-11-08T02:27:16.952Z] 8075.76 IOPS, 31.55 MiB/s [2024-11-08T02:27:16.952Z] 8112.46 IOPS, 31.69 MiB/s [2024-11-08T02:27:16.952Z] 8150.19 IOPS, 31.84 MiB/s [2024-11-08T02:27:16.952Z] 8184.44 IOPS, 31.97 MiB/s [2024-11-08T02:27:16.952Z] 8217.45 IOPS, 32.10 MiB/s [2024-11-08T02:27:16.952Z] Received shutdown signal, test time was about 55.503364 seconds 00:23:15.068 00:23:15.068 Latency(us) 00:23:15.068 [2024-11-08T02:27:16.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.068 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:15.068 Verification LBA range: start 0x0 length 0x4000 00:23:15.068 Nvme0n1 : 55.50 8231.11 32.15 0.00 0.00 15521.14 437.53 7015926.69 00:23:15.068 [2024-11-08T02:27:16.952Z] =================================================================================================================== 00:23:15.068 [2024-11-08T02:27:16.952Z] Total : 8231.11 32.15 0.00 0.00 15521.14 437.53 7015926.69 00:23:15.068 [2024-11-08 02:27:15.988807] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:23:15.068 02:27:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:15.068 rmmod nvme_tcp 00:23:15.068 rmmod nvme_fabrics 00:23:15.068 rmmod nvme_keyring 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 96029 ']' 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 96029 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 96029 ']' 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 96029 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96029 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:15.068 killing process with pid 96029 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96029' 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 96029 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 96029 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link 
set nvmf_tgt_br nomaster 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:23:15.068 00:23:15.068 real 1m0.295s 00:23:15.068 user 2m46.667s 00:23:15.068 sys 0m18.279s 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:15.068 02:27:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:15.068 ************************************ 00:23:15.068 END TEST nvmf_host_multipath 00:23:15.068 ************************************ 00:23:15.328 02:27:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:15.328 02:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:15.328 02:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:15.328 02:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.328 ************************************ 00:23:15.328 START TEST nvmf_timeout 00:23:15.328 ************************************ 00:23:15.328 02:27:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:15.328 * Looking for test storage... 
00:23:15.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:15.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.329 --rc genhtml_branch_coverage=1 00:23:15.329 --rc genhtml_function_coverage=1 00:23:15.329 --rc genhtml_legend=1 00:23:15.329 --rc geninfo_all_blocks=1 00:23:15.329 --rc geninfo_unexecuted_blocks=1 00:23:15.329 00:23:15.329 ' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:15.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.329 --rc genhtml_branch_coverage=1 00:23:15.329 --rc genhtml_function_coverage=1 00:23:15.329 --rc genhtml_legend=1 00:23:15.329 --rc geninfo_all_blocks=1 00:23:15.329 --rc geninfo_unexecuted_blocks=1 00:23:15.329 00:23:15.329 ' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:15.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.329 --rc genhtml_branch_coverage=1 00:23:15.329 --rc genhtml_function_coverage=1 00:23:15.329 --rc genhtml_legend=1 00:23:15.329 --rc geninfo_all_blocks=1 00:23:15.329 --rc geninfo_unexecuted_blocks=1 00:23:15.329 00:23:15.329 ' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:15.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.329 --rc genhtml_branch_coverage=1 00:23:15.329 --rc genhtml_function_coverage=1 00:23:15.329 --rc genhtml_legend=1 00:23:15.329 --rc geninfo_all_blocks=1 00:23:15.329 --rc geninfo_unexecuted_blocks=1 00:23:15.329 00:23:15.329 ' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.329 
02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:15.329 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.329 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:15.330 02:27:17 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:15.330 Cannot find device "nvmf_init_br" 00:23:15.330 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:15.589 Cannot find device "nvmf_init_br2" 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:23:15.589 Cannot find device "nvmf_tgt_br" 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:15.589 Cannot find device "nvmf_tgt_br2" 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:15.589 Cannot find device "nvmf_init_br" 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:15.589 Cannot find device "nvmf_init_br2" 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:15.589 Cannot find device "nvmf_tgt_br" 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:15.589 Cannot find device "nvmf_tgt_br2" 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:15.589 Cannot find device "nvmf_br" 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:15.589 Cannot find device "nvmf_init_if" 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:15.589 Cannot find device "nvmf_init_if2" 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:15.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:15.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:15.589 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:15.590 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
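Note: the nvmf_veth_init sequence traced above wires up a two-namespace test topology: the initiator-side veth ends (nvmf_init_if, nvmf_init_if2) stay in the default namespace, the target-side ends (nvmf_tgt_if, nvmf_tgt_if2) are moved into the nvmf_tgt_ns_spdk namespace, and all bridge-side peers are enslaved to nvmf_br. A minimal standalone sketch of the same wiring, consolidating the commands already shown in the trace, using only the first interface pair and the 10.0.0.x addresses above (assumes root, iproute2 and iptables; the second pair and error handling are omitted for brevity):

  # sketch of the veth/bridge topology built by nvmf_veth_init (first pair only)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                                      # same reachability check the trace runs next

The ping statistics that follow verify both directions of this path before the target application is started.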
00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:15.849 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:15.849 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:23:15.849 00:23:15.849 --- 10.0.0.3 ping statistics --- 00:23:15.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.849 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:15.849 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:15.849 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:23:15.849 00:23:15.849 --- 10.0.0.4 ping statistics --- 00:23:15.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.849 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:15.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:23:15.849 00:23:15.849 --- 10.0.0.1 ping statistics --- 00:23:15.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.849 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:15.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:23:15.849 00:23:15.849 --- 10.0.0.2 ping statistics --- 00:23:15.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.849 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=97235 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 97235 00:23:15.849 02:27:17 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97235 ']' 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:15.849 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:15.849 [2024-11-08 02:27:17.670144] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:15.849 [2024-11-08 02:27:17.670229] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.108 [2024-11-08 02:27:17.811253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:16.108 [2024-11-08 02:27:17.844881] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.108 [2024-11-08 02:27:17.845224] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.108 [2024-11-08 02:27:17.845374] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.108 [2024-11-08 02:27:17.845501] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.108 [2024-11-08 02:27:17.845535] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:16.108 [2024-11-08 02:27:17.845791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.109 [2024-11-08 02:27:17.845800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.109 [2024-11-08 02:27:17.873578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:16.109 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:16.109 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:16.109 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:16.109 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.109 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:16.109 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.109 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.109 02:27:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:16.367 [2024-11-08 02:27:18.237487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.625 02:27:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:16.883 Malloc0 00:23:16.883 02:27:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:17.141 02:27:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:17.399 02:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:17.658 [2024-11-08 02:27:19.362326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:17.658 02:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97278 00:23:17.658 02:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:17.658 02:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97278 /var/tmp/bdevperf.sock 00:23:17.658 02:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97278 ']' 00:23:17.658 02:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.658 02:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.658 02:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
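Note: pulled out of the shell trace above, the target-side preparation that timeout.sh performs before launching bdevperf amounts to starting nvmf_tgt inside the namespace and issuing five RPCs against its default socket (/var/tmp/spdk.sock). A rough consolidated sketch, with paths shortened to be relative to the SPDK repo root and the exact option strings taken from the trace:

  # start the target in the namespace created earlier, then configure it over RPC
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # options come from NVMF_TRANSPORT_OPTS above
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MB malloc bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Once the listener on 10.0.0.3:4420 is up, the test launches bdevperf with its own RPC socket (-r /var/tmp/bdevperf.sock) and waits for it, which is the point the trace has reached here.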
00:23:17.658 02:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.658 02:27:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:17.658 [2024-11-08 02:27:19.428568] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:17.658 [2024-11-08 02:27:19.428654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97278 ] 00:23:17.915 [2024-11-08 02:27:19.563584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.915 [2024-11-08 02:27:19.598465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.915 [2024-11-08 02:27:19.627907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:18.848 02:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.848 02:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:18.848 02:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:18.848 02:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:19.106 NVMe0n1 00:23:19.106 02:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97296 00:23:19.106 02:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:19.106 02:27:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:23:19.365 Running I/O for 10 seconds... 
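Note: on the initiator side the trace shows bdevperf being configured entirely over its own RPC socket; the controller is attached with explicit --ctrlr-loss-timeout-sec and --reconnect-delay-sec values, which the timeout scenarios below rely on. Consolidated from the commands above (repo-relative paths; flags exactly as traced), the host-side sequence is roughly:

  # bdevperf runs in the default namespace and connects to the target at 10.0.0.3:4420
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1   # global bdev_nvme option set before the attach, as traced
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attach exposes bdev NVMe0n1, and perform_tests kicks off the 10-second, queue-depth-128, 4096-byte verify workload whose completions and abort messages make up the qpair output that follows.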
00:23:20.300 02:27:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:20.560 8656.00 IOPS, 33.81 MiB/s [2024-11-08T02:27:22.445Z] [2024-11-08 02:27:22.218387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.218446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.218476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.218494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.218512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.218530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.218548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.218566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.218583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83080 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:20.561 [2024-11-08 02:27:22.218803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.218987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.218997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.219005] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.219015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.219023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.219033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.561 [2024-11-08 02:27:22.219042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.219051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.219059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.219069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.219078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.219088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.219096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.219106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.219114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.219136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.219145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.219155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.219163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.561 [2024-11-08 02:27:22.219174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.561 [2024-11-08 02:27:22.219182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219200] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:20.562 [2024-11-08 02:27:22.219594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219791] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.562 [2024-11-08 02:27:22.219817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.562 [2024-11-08 02:27:22.219925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.562 [2024-11-08 02:27:22.219935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.563 [2024-11-08 02:27:22.219943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.219953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.563 [2024-11-08 02:27:22.219961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.219970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.219979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.219989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.219997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:21 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83936 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.563 [2024-11-08 02:27:22.220454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.563 [2024-11-08 02:27:22.220473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.563 [2024-11-08 02:27:22.220492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.563 [2024-11-08 02:27:22.220511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.563 [2024-11-08 02:27:22.220530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.563 [2024-11-08 02:27:22.220548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.563 [2024-11-08 02:27:22.220567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.563 [2024-11-08 
02:27:22.220586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe20de0 is same with the state(6) to be set 00:23:20.563 [2024-11-08 02:27:22.220608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.563 [2024-11-08 02:27:22.220615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.563 [2024-11-08 02:27:22.220623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83512 len:8 PRP1 0x0 PRP2 0x0 00:23:20.563 [2024-11-08 02:27:22.220636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.563 [2024-11-08 02:27:22.220668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.563 [2024-11-08 02:27:22.220675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83968 len:8 PRP1 0x0 PRP2 0x0 00:23:20.563 [2024-11-08 02:27:22.220683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.563 [2024-11-08 02:27:22.220698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.563 [2024-11-08 02:27:22.220705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83976 len:8 PRP1 0x0 PRP2 0x0 00:23:20.563 [2024-11-08 02:27:22.220713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.563 [2024-11-08 02:27:22.220722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.563 [2024-11-08 02:27:22.220728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.563 [2024-11-08 02:27:22.220735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83984 len:8 PRP1 0x0 PRP2 0x0 00:23:20.563 [2024-11-08 02:27:22.220743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.220752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.220759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.220780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83992 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.220788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.220796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.220803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.220810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84000 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.220817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.220825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.220831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.220838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84008 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.220846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.220854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.220860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.220867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84016 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.220875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.220883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.220890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.220898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84024 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.220907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.220916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.220922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.220929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84032 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.220937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.220945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.220951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.220958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84040 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.220966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.220974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.220980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.220987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:84048 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.220994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.221003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.221009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.221016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84056 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.221023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.221031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.221038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.221045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84064 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.221052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.221060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.221066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.221073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84072 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.221081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.221089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.221096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.221102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84080 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.221110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.221119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.564 [2024-11-08 02:27:22.221126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.564 [2024-11-08 02:27:22.221132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84088 len:8 PRP1 0x0 PRP2 0x0 00:23:20.564 [2024-11-08 02:27:22.221143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.564 [2024-11-08 02:27:22.221191] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe20de0 was disconnected and freed. reset controller. 
00:23:20.564 [2024-11-08 02:27:22.221431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:20.564 [2024-11-08 02:27:22.221502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00500 (9): Bad file descriptor 00:23:20.564 [2024-11-08 02:27:22.221597] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.564 [2024-11-08 02:27:22.221618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe00500 with addr=10.0.0.3, port=4420 00:23:20.564 [2024-11-08 02:27:22.221629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00500 is same with the state(6) to be set 00:23:20.564 [2024-11-08 02:27:22.221659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00500 (9): Bad file descriptor 00:23:20.564 [2024-11-08 02:27:22.221686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:20.564 [2024-11-08 02:27:22.221696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:20.564 [2024-11-08 02:27:22.221707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:20.564 [2024-11-08 02:27:22.221725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.564 [2024-11-08 02:27:22.221736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:20.564 02:27:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:23:22.433 5192.00 IOPS, 20.28 MiB/s [2024-11-08T02:27:24.317Z] 3461.33 IOPS, 13.52 MiB/s [2024-11-08T02:27:24.317Z] [2024-11-08 02:27:24.221843] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.433 [2024-11-08 02:27:24.221902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe00500 with addr=10.0.0.3, port=4420 00:23:22.433 [2024-11-08 02:27:24.221917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00500 is same with the state(6) to be set 00:23:22.433 [2024-11-08 02:27:24.221938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00500 (9): Bad file descriptor 00:23:22.433 [2024-11-08 02:27:24.221954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:22.433 [2024-11-08 02:27:24.221963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:22.433 [2024-11-08 02:27:24.221972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:22.433 [2024-11-08 02:27:24.221995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
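For reference, the reconnect cadence in the timestamps above matches the attach flags (--reconnect-delay-sec 2, --ctrlr-loss-timeout-sec 5): the listener is removed and the first reset starts at 02:27:22.2, with every queued command aborted as SQ DELETION; the following connect() attempts fail with errno 111 (ECONNREFUSED) at roughly 02:27:24.2 and 02:27:26.2, one attempt every 2 s; and the 02:27:28.2 attempt finds the controller already marked failed because the 5 s loss timeout has expired. The falling throughput samples (8656 -> 5192 -> 3461 -> 2596 -> ... IOPS) are consistent with no new completions after the disconnect: each figure is the same ~10,384 completed I/Os divided by a runtime that has grown by another second (5192 * 2 ≈ 3461.33 * 3 ≈ 2596 * 4 ≈ 10,384), not the target slowing down.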
00:23:22.433 [2024-11-08 02:27:24.222005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:22.433 02:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:23:22.433 02:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:22.433 02:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:22.691 02:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:22.691 02:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:23:22.691 02:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:22.691 02:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:22.950 02:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:22.950 02:27:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:23:24.583 2596.00 IOPS, 10.14 MiB/s [2024-11-08T02:27:26.467Z] 2076.80 IOPS, 8.11 MiB/s [2024-11-08T02:27:26.467Z] [2024-11-08 02:27:26.222210] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.583 [2024-11-08 02:27:26.222293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe00500 with addr=10.0.0.3, port=4420 00:23:24.583 [2024-11-08 02:27:26.222309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe00500 is same with the state(6) to be set 00:23:24.583 [2024-11-08 02:27:26.222331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe00500 (9): Bad file descriptor 00:23:24.583 [2024-11-08 02:27:26.222348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:24.583 [2024-11-08 02:27:26.222357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:24.583 [2024-11-08 02:27:26.222367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:24.583 [2024-11-08 02:27:26.222390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.583 [2024-11-08 02:27:26.222402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:26.550 1730.67 IOPS, 6.76 MiB/s [2024-11-08T02:27:28.434Z] 1483.43 IOPS, 5.79 MiB/s [2024-11-08T02:27:28.434Z] [2024-11-08 02:27:28.222535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:26.550 [2024-11-08 02:27:28.222574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:26.550 [2024-11-08 02:27:28.222585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:26.550 [2024-11-08 02:27:28.222595] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:26.550 [2024-11-08 02:27:28.222619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
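While the reconnect window is still open, the test checks that the controller and its bdev have not been torn down yet. The get_controller and get_bdev helpers referenced above (host/timeout.sh@41 and @37) amount to the following; this is a sketch of their visible behaviour only, with rpc.py standing in for the full spdk_repo path:

    # Helpers as suggested by the rpc.py calls and jq filters in the trace
    get_controller() {
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    }
    get_bdev() {
        rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
    }
    [[ "$(get_controller)" == "NVMe0" ]]    # controller still registered mid-reconnect
    [[ "$(get_bdev)" == "NVMe0n1" ]]        # namespace bdev still present

Later in the trace (host/timeout.sh@62 and @63) the same helpers return empty strings, confirming that the controller and bdev are deleted once the controller-loss timeout expires.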
00:23:27.499 1298.00 IOPS, 5.07 MiB/s 00:23:27.499 Latency(us) 00:23:27.499 [2024-11-08T02:27:29.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.499 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:27.499 Verification LBA range: start 0x0 length 0x4000 00:23:27.500 NVMe0n1 : 8.17 1270.59 4.96 15.66 0.00 99381.25 3112.96 7015926.69 00:23:27.500 [2024-11-08T02:27:29.384Z] =================================================================================================================== 00:23:27.500 [2024-11-08T02:27:29.384Z] Total : 1270.59 4.96 15.66 0.00 99381.25 3112.96 7015926.69 00:23:27.500 { 00:23:27.500 "results": [ 00:23:27.500 { 00:23:27.500 "job": "NVMe0n1", 00:23:27.500 "core_mask": "0x4", 00:23:27.500 "workload": "verify", 00:23:27.500 "status": "finished", 00:23:27.500 "verify_range": { 00:23:27.500 "start": 0, 00:23:27.500 "length": 16384 00:23:27.500 }, 00:23:27.500 "queue_depth": 128, 00:23:27.500 "io_size": 4096, 00:23:27.500 "runtime": 8.172564, 00:23:27.500 "iops": 1270.5926805834742, 00:23:27.500 "mibps": 4.963252658529196, 00:23:27.500 "io_failed": 128, 00:23:27.500 "io_timeout": 0, 00:23:27.500 "avg_latency_us": 99381.25451778054, 00:23:27.500 "min_latency_us": 3112.96, 00:23:27.500 "max_latency_us": 7015926.69090909 00:23:27.500 } 00:23:27.500 ], 00:23:27.500 "core_count": 1 00:23:27.500 } 00:23:28.108 02:27:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:28.108 02:27:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:28.108 02:27:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:28.366 02:27:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:28.366 02:27:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:28.366 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:28.366 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97296 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97278 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97278 ']' 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97278 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97278 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:28.625 killing process with pid 97278 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97278' 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 
-- # kill 97278 00:23:28.625 Received shutdown signal, test time was about 9.271220 seconds 00:23:28.625 00:23:28.625 Latency(us) 00:23:28.625 [2024-11-08T02:27:30.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.625 [2024-11-08T02:27:30.509Z] =================================================================================================================== 00:23:28.625 [2024-11-08T02:27:30.509Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97278 00:23:28.625 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:28.884 [2024-11-08 02:27:30.661786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:28.884 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97419 00:23:28.884 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:28.884 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97419 /var/tmp/bdevperf.sock 00:23:28.884 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97419 ']' 00:23:28.884 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:28.884 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:28.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:28.884 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:28.884 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:28.884 02:27:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:28.884 [2024-11-08 02:27:30.728784] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:28.884 [2024-11-08 02:27:30.728876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97419 ] 00:23:29.143 [2024-11-08 02:27:30.858943] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.143 [2024-11-08 02:27:30.894819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.143 [2024-11-08 02:27:30.924561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:30.078 02:27:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.078 02:27:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:30.078 02:27:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:30.078 02:27:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:30.337 NVMe0n1 00:23:30.337 02:27:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97442 00:23:30.337 02:27:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:30.337 02:27:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:30.596 Running I/O for 10 seconds... 
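The bdev_nvme_attach_controller call above is where the behaviour under test is configured: --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 2 and --ctrlr-loss-timeout-sec 5, which roughly mean retry the connection every second, start failing I/O fast after two seconds, and give up on the controller entirely after five. A sketch of that attach with the flags spelled out on one command, values copied from this run:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 2 --ctrlr-loss-timeout-sec 5
  # timeout.sh@87 (the next step in the log) then removes the target listener,
  # so the reconnect/timeout path is exercised while bdevperf keeps issuing I/O.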
00:23:31.531 02:27:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:31.793 8084.00 IOPS, 31.58 MiB/s [2024-11-08T02:27:33.677Z] [2024-11-08 02:27:33.465070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.793 [2024-11-08 02:27:33.465312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 
00:23:31.794 [2024-11-08 02:27:33.466014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.794 [2024-11-08 02:27:33.466022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.794 [2024-11-08 02:27:33.466030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1a20 is same with the state(6) to be set 00:23:31.794 [2024-11-08 02:27:33.466087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 
02:27:33.466578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.794 [2024-11-08 02:27:33.466651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.794 [2024-11-08 02:27:33.466660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.466989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.466999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:42 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73384 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:31.795 [2024-11-08 02:27:33.467417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.795 [2024-11-08 02:27:33.467496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.795 [2024-11-08 02:27:33.467504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467636] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:73616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.796 [2024-11-08 02:27:33.467813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:73624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.796 [2024-11-08 02:27:33.467821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:31.796 [2024-11-08 02:27:33.467831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.796 [2024-11-08 02:27:33.467842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:31.796 [... the same print_command/print_completion pair repeats for every outstanding command on sqid:1 - READs for lba 73640-73880 and WRITEs for lba 73896-74008 - each completed with ABORTED - SQ DELETION (00/08) ...]
00:23:31.797 [2024-11-08 02:27:33.468802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea0de0 is same with the state(6) to be set
00:23:31.797 [2024-11-08 02:27:33.468812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:31.797 [2024-11-08 02:27:33.468819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:31.797 [2024-11-08 02:27:33.468830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73888 len:8 PRP1 0x0 PRP2 0x0
00:23:31.797 [2024-11-08 02:27:33.468839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:31.797 [2024-11-08 02:27:33.468877] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ea0de0 was disconnected and freed. reset controller.
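The "(00/08)" pair that spdk_nvme_print_completion logs above is the (status code type / status code) of each aborted completion; per the NVMe base specification, SCT 0x0 is Generic Command Status and SC 0x08 is "Command Aborted due to SQ Deletion", which is what the host reports once the target side of the queue pair goes away. A minimal decoding sketch, illustrative only and not part of the test output:

# Illustrative sketch: decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion,
# e.g. "(00/08)" in the completions above. Only the codes seen in this log are mapped.
GENERIC_STATUS = {
    0x00: "Successful Completion",
    0x08: "Command Aborted due to SQ Deletion",
}

def decode(sct: int, sc: int) -> str:
    # SCT 0x0 is the Generic Command Status set defined by the NVMe base spec.
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, f"generic status {sc:#x}")
    return f"SCT {sct:#x} / SC {sc:#x}"

print(decode(0x00, 0x08))  # -> Command Aborted due to SQ Deletion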
00:23:31.797 [2024-11-08 02:27:33.469162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:31.797 [2024-11-08 02:27:33.469250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e80500 (9): Bad file descriptor
00:23:31.797 [2024-11-08 02:27:33.469349] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.797 [2024-11-08 02:27:33.469370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e80500 with addr=10.0.0.3, port=4420
00:23:31.797 [2024-11-08 02:27:33.469381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e80500 is same with the state(6) to be set
00:23:31.797 [2024-11-08 02:27:33.469398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e80500 (9): Bad file descriptor
00:23:31.797 [2024-11-08 02:27:33.469415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:31.797 [2024-11-08 02:27:33.469424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:31.797 [2024-11-08 02:27:33.469434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:31.797 [2024-11-08 02:27:33.469453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:31.797 [2024-11-08 02:27:33.469464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
02:27:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:23:32.733 4562.00 IOPS, 17.82 MiB/s [2024-11-08T02:27:34.617Z] [2024-11-08 02:27:34.469548] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:32.733 [2024-11-08 02:27:34.469609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e80500 with addr=10.0.0.3, port=4420
00:23:32.733 [2024-11-08 02:27:34.469624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e80500 is same with the state(6) to be set
00:23:32.733 [2024-11-08 02:27:34.469645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e80500 (9): Bad file descriptor
00:23:32.733 [2024-11-08 02:27:34.469660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:32.733 [2024-11-08 02:27:34.469668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:32.733 [2024-11-08 02:27:34.469678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:32.733 [2024-11-08 02:27:34.469699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:32.733 [2024-11-08 02:27:34.469710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
02:27:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:32.991 [2024-11-08 02:27:34.749312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:23:32.991 02:27:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 97442
00:23:33.817 3041.33 IOPS, 11.88 MiB/s [2024-11-08T02:27:35.701Z] [2024-11-08 02:27:35.487111] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:35.688 2281.00 IOPS, 8.91 MiB/s [2024-11-08T02:27:38.507Z]
3659.60 IOPS, 14.30 MiB/s [2024-11-08T02:27:39.442Z]
4868.33 IOPS, 19.02 MiB/s [2024-11-08T02:27:40.379Z]
5735.14 IOPS, 22.40 MiB/s [2024-11-08T02:27:41.762Z]
6389.25 IOPS, 24.96 MiB/s [2024-11-08T02:27:42.331Z]
6886.44 IOPS, 26.90 MiB/s [2024-11-08T02:27:42.590Z]
7293.00 IOPS, 28.49 MiB/s
00:23:40.706 Latency(us)
00:23:40.706 [2024-11-08T02:27:42.590Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:40.706 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:40.706 Verification LBA range: start 0x0 length 0x4000
00:23:40.706 NVMe0n1 :      10.01    7296.16      28.50       0.00     0.00   17513.51    1392.64 3035150.89
00:23:40.706 [2024-11-08T02:27:42.590Z] ===================================================================================================================
00:23:40.706 [2024-11-08T02:27:42.590Z] Total :                 7296.16      28.50       0.00     0.00   17513.51    1392.64 3035150.89
00:23:40.706 {
00:23:40.706   "results": [
00:23:40.706     {
00:23:40.706       "job": "NVMe0n1",
00:23:40.706       "core_mask": "0x4",
00:23:40.706       "workload": "verify",
00:23:40.706       "status": "finished",
00:23:40.706       "verify_range": {
00:23:40.706         "start": 0,
00:23:40.706         "length": 16384
00:23:40.706       },
00:23:40.706       "queue_depth": 128,
00:23:40.706       "io_size": 4096,
00:23:40.706       "runtime": 10.008829,
00:23:40.706       "iops": 7296.158221905879,
00:23:40.706       "mibps": 28.50061805431984,
00:23:40.706       "io_failed": 0,
00:23:40.706       "io_timeout": 0,
00:23:40.706       "avg_latency_us": 17513.514962093202,
00:23:40.706       "min_latency_us": 1392.64,
00:23:40.706       "max_latency_us": 3035150.8945454545
00:23:40.706     }
00:23:40.706   ],
00:23:40.706   "core_count": 1
00:23:40.706 }
02:27:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97547
02:27:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
02:27:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
Running I/O for 10 seconds...
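The summary table and the JSON block above describe the same bdevperf run; the MiB/s column is simply iops multiplied by io_size. A minimal cross-check sketch, assuming the JSON object has been saved to "bdevperf_results.json" (a hypothetical filename, not produced by the test itself):

# Illustrative sketch: re-derive the bdevperf summary line from the JSON results above.
# Assumes the JSON block has been copied into "bdevperf_results.json" (hypothetical name).
import json

with open("bdevperf_results.json") as f:
    data = json.load(f)

job = data["results"][0]
# 7296.16 IOPS * 4096 B per I/O ~= 28.50 MiB/s, matching the "mibps" field and the table.
mib_per_s = job["iops"] * job["io_size"] / (1024 * 1024)
print(f'{job["job"]}: {job["iops"]:.2f} IOPS, {mib_per_s:.2f} MiB/s '
      f'over {job["runtime"]:.2f} s, {job["io_failed"]} failed I/Os')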
00:23:41.642 02:27:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:41.903 8084.00 IOPS, 31.58 MiB/s [2024-11-08T02:27:43.787Z] [2024-11-08 02:27:43.608229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.903 [2024-11-08 02:27:43.608274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73056 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.903 [2024-11-08 02:27:43.608633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.903 [2024-11-08 02:27:43.608643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:41.904 [2024-11-08 02:27:43.608651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608846] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.608986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.608995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609031] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.904 [2024-11-08 02:27:43.609387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.904 [2024-11-08 02:27:43.609396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 
02:27:43.609443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.609990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.609998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.610009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73704 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.610017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.610027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.610035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.905 [2024-11-08 02:27:43.610045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.905 [2024-11-08 02:27:43.610053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 
02:27:43.610254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610664] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:41.906 [2024-11-08 02:27:43.610777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:41.906 [2024-11-08 02:27:43.610796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.906 [2024-11-08 02:27:43.610806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea31d0 is same with the state(6) to be set 00:23:41.907 [2024-11-08 02:27:43.610818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:41.907 [2024-11-08 02:27:43.610825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:41.907 [2024-11-08 02:27:43.610833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73880 len:8 PRP1 0x0 PRP2 0x0 00:23:41.907 [2024-11-08 02:27:43.610843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:41.907 [2024-11-08 02:27:43.610884] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ea31d0 was disconnected and freed. reset controller. 
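The flood of ABORTED - SQ DELETION completions above is the expected fallout of the target connection going away while I/O is outstanding on qpair 0x1ea31d0: every in-flight command is completed with NVMe generic status 08h (Command Aborted due to SQ Deletion), the still-queued requests are then completed manually, and the qpair is freed before a controller reset is scheduled. A field-by-field reading of one such completion line, under the standard NVMe completion layout (this decode is an explanatory aid, not part of the captured log):

    ABORTED - SQ DELETION (00/08)    status code type 0x0 (generic command status), status code 0x08
    qid:1 cid:0                      I/O queue pair 1, command identifier 0
    cdw0:0 sqhd:0000                 completion dword 0 and submission queue head pointer
    p:0 m:0 dnr:0                    phase tag, "more" bit, do-not-retry bit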
00:23:41.907 [2024-11-08 02:27:43.611169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:41.907 [2024-11-08 02:27:43.611305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e80500 (9): Bad file descriptor 00:23:41.907 [2024-11-08 02:27:43.611401] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:41.907 [2024-11-08 02:27:43.611424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e80500 with addr=10.0.0.3, port=4420 00:23:41.907 [2024-11-08 02:27:43.611435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e80500 is same with the state(6) to be set 00:23:41.907 [2024-11-08 02:27:43.611457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e80500 (9): Bad file descriptor 00:23:41.907 [2024-11-08 02:27:43.611488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:41.907 [2024-11-08 02:27:43.611497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:41.907 [2024-11-08 02:27:43.611507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:41.907 [2024-11-08 02:27:43.611528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.907 [2024-11-08 02:27:43.611539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:41.907 02:27:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:42.842 4554.00 IOPS, 17.79 MiB/s [2024-11-08T02:27:44.726Z] [2024-11-08 02:27:44.611632] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.842 [2024-11-08 02:27:44.611866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e80500 with addr=10.0.0.3, port=4420 00:23:42.842 [2024-11-08 02:27:44.611890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e80500 is same with the state(6) to be set 00:23:42.842 [2024-11-08 02:27:44.611917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e80500 (9): Bad file descriptor 00:23:42.842 [2024-11-08 02:27:44.611934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:42.842 [2024-11-08 02:27:44.611943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:42.842 [2024-11-08 02:27:44.611953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:42.842 [2024-11-08 02:27:44.611977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:42.842 [2024-11-08 02:27:44.611988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:43.778 3036.00 IOPS, 11.86 MiB/s [2024-11-08T02:27:45.662Z] [2024-11-08 02:27:45.612065] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.778 [2024-11-08 02:27:45.612329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e80500 with addr=10.0.0.3, port=4420 00:23:43.778 [2024-11-08 02:27:45.612354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e80500 is same with the state(6) to be set 00:23:43.778 [2024-11-08 02:27:45.612379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e80500 (9): Bad file descriptor 00:23:43.778 [2024-11-08 02:27:45.612396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:43.778 [2024-11-08 02:27:45.612405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:43.778 [2024-11-08 02:27:45.612415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:43.778 [2024-11-08 02:27:45.612439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:43.778 [2024-11-08 02:27:45.612450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.972 2277.00 IOPS, 8.89 MiB/s [2024-11-08T02:27:46.856Z] [2024-11-08 02:27:46.615794] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.972 [2024-11-08 02:27:46.616041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e80500 with addr=10.0.0.3, port=4420 00:23:44.972 [2024-11-08 02:27:46.616066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e80500 is same with the state(6) to be set 00:23:44.972 [2024-11-08 02:27:46.616358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e80500 (9): Bad file descriptor 00:23:44.972 [2024-11-08 02:27:46.616681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.972 [2024-11-08 02:27:46.616693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.972 [2024-11-08 02:27:46.616703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.972 [2024-11-08 02:27:46.620757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.972 [2024-11-08 02:27:46.620786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.972 02:27:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:45.230 [2024-11-08 02:27:46.904615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:45.230 02:27:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 97547 00:23:45.797 1821.60 IOPS, 7.12 MiB/s [2024-11-08T02:27:47.681Z] [2024-11-08 02:27:47.659082] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
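The sequence above is the fault this test injects on purpose: the target's TCP listener on 10.0.0.3:4420 was dropped earlier in the run, so each reconnect attempt fails with errno 111 (ECONNREFUSED) and bdev_nvme keeps scheduling fresh resets; once host/timeout.sh@102 re-adds the listener, the very next reset succeeds ("Resetting controller successful"). Reduced to its essentials, and using only the rpc.py calls that appear in this log (the timing and shell wrapping are illustrative, not the test script itself), the injected fault looks like:

    # drop the listener while the initiator still has I/O in flight
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3    # initiator-side resets fail with connect() errno 111 during this window
    # restore the listener; the next scheduled reset/reconnect succeeds
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420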
00:23:47.680 2992.50 IOPS, 11.69 MiB/s [2024-11-08T02:27:50.506Z] 4114.71 IOPS, 16.07 MiB/s [2024-11-08T02:27:51.881Z] 4946.38 IOPS, 19.32 MiB/s [2024-11-08T02:27:52.817Z] 5606.56 IOPS, 21.90 MiB/s [2024-11-08T02:27:52.817Z] 6145.90 IOPS, 24.01 MiB/s 00:23:50.933 Latency(us) 00:23:50.933 [2024-11-08T02:27:52.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.934 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:50.934 Verification LBA range: start 0x0 length 0x4000 00:23:50.934 NVMe0n1 : 10.01 6152.68 24.03 4323.47 0.00 12196.24 625.57 3019898.88 00:23:50.934 [2024-11-08T02:27:52.818Z] =================================================================================================================== 00:23:50.934 [2024-11-08T02:27:52.818Z] Total : 6152.68 24.03 4323.47 0.00 12196.24 0.00 3019898.88 00:23:50.934 { 00:23:50.934 "results": [ 00:23:50.934 { 00:23:50.934 "job": "NVMe0n1", 00:23:50.934 "core_mask": "0x4", 00:23:50.934 "workload": "verify", 00:23:50.934 "status": "finished", 00:23:50.934 "verify_range": { 00:23:50.934 "start": 0, 00:23:50.934 "length": 16384 00:23:50.934 }, 00:23:50.934 "queue_depth": 128, 00:23:50.934 "io_size": 4096, 00:23:50.934 "runtime": 10.00978, 00:23:50.934 "iops": 6152.682676342537, 00:23:50.934 "mibps": 24.033916704463035, 00:23:50.934 "io_failed": 43277, 00:23:50.934 "io_timeout": 0, 00:23:50.934 "avg_latency_us": 12196.236193858018, 00:23:50.934 "min_latency_us": 625.5709090909091, 00:23:50.934 "max_latency_us": 3019898.88 00:23:50.934 } 00:23:50.934 ], 00:23:50.934 "core_count": 1 00:23:50.934 } 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97419 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97419 ']' 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97419 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97419 00:23:50.934 killing process with pid 97419 00:23:50.934 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.934 00:23:50.934 Latency(us) 00:23:50.934 [2024-11-08T02:27:52.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.934 [2024-11-08T02:27:52.818Z] =================================================================================================================== 00:23:50.934 [2024-11-08T02:27:52.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97419' 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97419 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97419 00:23:50.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
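A quick arithmetic cross-check on the results block above (the calculations are a reading aid, not part of the log): at the 4096-byte I/O size, the reported IOPS and failure rate are internally consistent, and the ~3.02 s maximum latency is consistent with I/O stalling for the window during which the listener was down.

    6152.68 IOPS x 4096 B / 2^20     ≈ 24.03 MiB/s   (reported "mibps": 24.0339)
    43277 io_failed / 10.00978 s     ≈ 4323.5 Fail/s (reported: 4323.47)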
00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97660 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97660 /var/tmp/bdevperf.sock 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97660 ']' 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.934 02:27:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:50.934 [2024-11-08 02:27:52.741834] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:50.934 [2024-11-08 02:27:52.742737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97660 ] 00:23:51.193 [2024-11-08 02:27:52.882537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.193 [2024-11-08 02:27:52.916179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.193 [2024-11-08 02:27:52.944956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:52.128 02:27:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.128 02:27:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:52.128 02:27:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97660 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:52.128 02:27:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97672 00:23:52.128 02:27:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:52.128 02:27:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:52.695 NVMe0n1 00:23:52.695 02:27:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:52.695 02:27:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97719 00:23:52.695 02:27:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:52.695 Running I/O for 10 seconds... 
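For this second bdevperf run the controller is attached with a bounded reconnect policy, and the nvmf_timeout.bt bpftrace script is attached to the new bdevperf process (pid 97660) to observe the timeout path. The two flags that matter are --reconnect-delay-sec and --ctrlr-loss-timeout-sec; an annotated restatement of the attach command already shown above (illustrative only, not an additional step in the test):

    # --reconnect-delay-sec 2      wait 2 seconds between reconnect attempts
    # --ctrlr-loss-timeout-sec 5   stop retrying and delete the controller if it stays
    #                              unreachable for 5 seconds (pending I/O then fails)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2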
00:23:53.629 02:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:53.890 17399.00 IOPS, 67.96 MiB/s [2024-11-08T02:27:55.774Z] [2024-11-08 02:27:55.578465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.890 [2024-11-08 02:27:55.578726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.890 [2024-11-08 02:27:55.578950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.890 [2024-11-08 02:27:55.579095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.890 [2024-11-08 02:27:55.579272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.890 [2024-11-08 02:27:55.579411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.890 [2024-11-08 02:27:55.579560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.890 [2024-11-08 02:27:55.579793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.890 [2024-11-08 02:27:55.579854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.890 [2024-11-08 02:27:55.580022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.890 [2024-11-08 02:27:55.580076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.890 [2024-11-08 02:27:55.580242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.890 [2024-11-08 02:27:55.580306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.890 [2024-11-08 02:27:55.580317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.890 [2024-11-08 02:27:55.580328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.890 [2024-11-08 02:27:55.580337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.890 [2024-11-08 02:27:55.580347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.890 [2024-11-08 02:27:55.580356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 
nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115048 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:53.891 [2024-11-08 02:27:55.580768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 
02:27:55.580967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.580985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.580995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.581003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.581012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.581021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.581030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.581040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.581049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.581058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.581068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.581076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.581086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.581094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.581104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.581112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.891 [2024-11-08 02:27:55.581121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.891 [2024-11-08 02:27:55.581130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.581141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.581376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.581449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.581504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.581670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.581799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.581850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.581900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 
[2024-11-08 02:27:55.582824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.582983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.582994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.583003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.892 [2024-11-08 02:27:55.583014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.892 [2024-11-08 02:27:55.583023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583054] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:16 nsid:1 lba:5568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27216 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:53.893 [2024-11-08 02:27:55.583806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.893 [2024-11-08 02:27:55.583815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x190eb80 is same with the state(6) to be set 00:23:53.893 [2024-11-08 02:27:55.583826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:53.893 [2024-11-08 02:27:55.583833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:53.893 [2024-11-08 02:27:55.583842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112408 len:8 PRP1 0x0 PRP2 0x0 00:23:53.894 [2024-11-08 02:27:55.583850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:53.894 [2024-11-08 02:27:55.583888] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x190eb80 was disconnected and freed. reset controller. 
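Each READ still queued on qpair 1 is completed manually with ABORTED - SQ DELETION status once the TCP connection drops, which is why the same completion repeats above with only the cid and lba changing. A quick way to summarize such a flood when reading a saved console log (a minimal triage sketch, not part of the test; build.log is an illustrative file name):

grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' build.log | awk '{print $NF}' | sort | uniq -c
# prints one line per queue id with the number of aborted commands, e.g. "32 qid:1"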
00:23:53.894 [2024-11-08 02:27:55.584167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:53.894 [2024-11-08 02:27:55.584262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ee500 (9): Bad file descriptor 00:23:53.894 [2024-11-08 02:27:55.584381] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:53.894 [2024-11-08 02:27:55.584405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ee500 with addr=10.0.0.3, port=4420 00:23:53.894 [2024-11-08 02:27:55.584418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee500 is same with the state(6) to be set 00:23:53.894 [2024-11-08 02:27:55.584437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ee500 (9): Bad file descriptor 00:23:53.894 [2024-11-08 02:27:55.584452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:53.894 [2024-11-08 02:27:55.584477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:53.894 [2024-11-08 02:27:55.584487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:53.894 [2024-11-08 02:27:55.584506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:53.894 [2024-11-08 02:27:55.584516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:53.894 02:27:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97719 00:23:55.762 10223.50 IOPS, 39.94 MiB/s [2024-11-08T02:27:57.646Z] 6815.67 IOPS, 26.62 MiB/s [2024-11-08T02:27:57.646Z] [2024-11-08 02:27:57.584644] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:55.762 [2024-11-08 02:27:57.584857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ee500 with addr=10.0.0.3, port=4420 00:23:55.762 [2024-11-08 02:27:57.585009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee500 is same with the state(6) to be set 00:23:55.762 [2024-11-08 02:27:57.585195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ee500 (9): Bad file descriptor 00:23:55.762 [2024-11-08 02:27:57.585342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:55.762 [2024-11-08 02:27:57.585408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:55.762 [2024-11-08 02:27:57.585567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:55.762 [2024-11-08 02:27:57.585625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:55.762 [2024-11-08 02:27:57.585739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.677 5111.75 IOPS, 19.97 MiB/s [2024-11-08T02:27:59.827Z] 4089.40 IOPS, 15.97 MiB/s [2024-11-08T02:27:59.827Z] [2024-11-08 02:27:59.586009] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.943 [2024-11-08 02:27:59.586238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ee500 with addr=10.0.0.3, port=4420 00:23:57.943 [2024-11-08 02:27:59.586522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee500 is same with the state(6) to be set 00:23:57.943 [2024-11-08 02:27:59.586673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ee500 (9): Bad file descriptor 00:23:57.943 [2024-11-08 02:27:59.586955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.943 [2024-11-08 02:27:59.587101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.943 [2024-11-08 02:27:59.587156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:57.943 [2024-11-08 02:27:59.587184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.943 [2024-11-08 02:27:59.587197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:59.813 3407.83 IOPS, 13.31 MiB/s [2024-11-08T02:28:01.697Z] 2921.00 IOPS, 11.41 MiB/s [2024-11-08T02:28:01.697Z] [2024-11-08 02:28:01.587293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:59.813 [2024-11-08 02:28:01.587506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:59.813 [2024-11-08 02:28:01.587527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:59.813 [2024-11-08 02:28:01.587537] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:59.813 [2024-11-08 02:28:01.587584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
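The cadence above - disconnect at 02:27:55, reconnect attempts at 02:27:57, 02:27:59 and 02:28:01, then a final transition to the failed state - is driven by the bdev_nvme reconnect knobs that timeout.sh exercises. As a rough sketch (assuming an SPDK build whose scripts/rpc.py exposes these options; the values are illustrative, not read from this run), similar behaviour can be requested globally before attaching a controller:

scripts/rpc.py bdev_nvme_set_options \
    --reconnect-delay-sec 2 \
    --ctrlr-loss-timeout-sec 8 \
    --fast-io-fail-timeout-sec 4
# retry the connection every 2 s; fail I/O queued during reconnect after 4 s; give up
# and leave the controller in the failed state after 8 s, which is roughly the
# "Resetting controller failed" point seen above.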
00:24:00.747 2555.88 IOPS, 9.98 MiB/s 00:24:00.747 Latency(us) 00:24:00.747 [2024-11-08T02:28:02.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.747 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:00.747 NVMe0n1 : 8.19 2498.01 9.76 15.64 0.00 50845.72 6881.28 7046430.72 00:24:00.747 [2024-11-08T02:28:02.631Z] =================================================================================================================== 00:24:00.747 [2024-11-08T02:28:02.631Z] Total : 2498.01 9.76 15.64 0.00 50845.72 6881.28 7046430.72 00:24:00.747 { 00:24:00.747 "results": [ 00:24:00.747 { 00:24:00.747 "job": "NVMe0n1", 00:24:00.747 "core_mask": "0x4", 00:24:00.747 "workload": "randread", 00:24:00.747 "status": "finished", 00:24:00.747 "queue_depth": 128, 00:24:00.747 "io_size": 4096, 00:24:00.747 "runtime": 8.185321, 00:24:00.747 "iops": 2498.008324902591, 00:24:00.747 "mibps": 9.757845019150746, 00:24:00.747 "io_failed": 128, 00:24:00.747 "io_timeout": 0, 00:24:00.747 "avg_latency_us": 50845.71794366508, 00:24:00.747 "min_latency_us": 6881.28, 00:24:00.747 "max_latency_us": 7046430.72 00:24:00.747 } 00:24:00.747 ], 00:24:00.747 "core_count": 1 00:24:00.747 } 00:24:00.747 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.747 Attaching 5 probes... 00:24:00.747 1346.893146: reset bdev controller NVMe0 00:24:00.747 1347.062511: reconnect bdev controller NVMe0 00:24:00.747 3347.288370: reconnect delay bdev controller NVMe0 00:24:00.747 3347.322012: reconnect bdev controller NVMe0 00:24:00.747 5348.636891: reconnect delay bdev controller NVMe0 00:24:00.747 5348.669048: reconnect bdev controller NVMe0 00:24:00.747 7349.977456: reconnect delay bdev controller NVMe0 00:24:00.747 7350.026159: reconnect bdev controller NVMe0 00:24:00.747 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:00.747 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:00.747 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 97672 00:24:00.747 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.747 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97660 00:24:00.747 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97660 ']' 00:24:00.747 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97660 00:24:00.747 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:24:00.747 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:00.747 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97660 00:24:01.006 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:01.006 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:01.006 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97660' 00:24:01.006 killing process with pid 97660 00:24:01.006 Received shutdown signal, test time was about 8.254954 seconds 00:24:01.006 00:24:01.006 Latency(us) 00:24:01.006 
[2024-11-08T02:28:02.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.006 [2024-11-08T02:28:02.890Z] =================================================================================================================== 00:24:01.006 [2024-11-08T02:28:02.890Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.006 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97660 00:24:01.006 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97660 00:24:01.006 02:28:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.264 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:01.264 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:24:01.264 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:01.264 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:24:01.264 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.264 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:24:01.264 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.264 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.264 rmmod nvme_tcp 00:24:01.264 rmmod nvme_fabrics 00:24:01.264 rmmod nvme_keyring 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 97235 ']' 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 97235 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97235 ']' 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97235 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97235 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:01.523 killing process with pid 97235 00:24:01.523 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97235' 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97235 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97235 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:01.524 02:28:03 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-save 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:01.524 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:24:01.782 00:24:01.782 real 0m46.598s 00:24:01.782 user 2m16.865s 00:24:01.782 sys 0m5.660s 00:24:01.782 ************************************ 00:24:01.782 END TEST nvmf_timeout 00:24:01.782 ************************************ 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:01.782 02:28:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:01.783 02:28:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:24:01.783 02:28:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:01.783 ************************************ 00:24:01.783 END TEST nvmf_host 00:24:01.783 ************************************ 00:24:01.783 00:24:01.783 real 5m43.877s 00:24:01.783 user 16m7.307s 00:24:01.783 sys 1m15.949s 00:24:01.783 02:28:03 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:24:01.783 02:28:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.783 02:28:03 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:24:01.783 02:28:03 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:24:01.783 ************************************ 00:24:01.783 END TEST nvmf_tcp 00:24:01.783 ************************************ 00:24:01.783 00:24:01.783 real 15m3.848s 00:24:01.783 user 39m46.146s 00:24:01.783 sys 3m58.662s 00:24:01.783 02:28:03 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:01.783 02:28:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.042 02:28:03 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:24:02.042 02:28:03 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:02.042 02:28:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:02.042 02:28:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:02.042 02:28:03 -- common/autotest_common.sh@10 -- # set +x 00:24:02.042 ************************************ 00:24:02.042 START TEST nvmf_dif 00:24:02.042 ************************************ 00:24:02.042 02:28:03 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:02.042 * Looking for test storage... 00:24:02.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:02.042 02:28:03 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:02.042 02:28:03 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:24:02.042 02:28:03 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:02.042 02:28:03 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:02.042 02:28:03 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:24:02.042 02:28:03 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.042 02:28:03 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:02.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.042 --rc genhtml_branch_coverage=1 00:24:02.042 --rc genhtml_function_coverage=1 00:24:02.042 --rc genhtml_legend=1 00:24:02.042 --rc geninfo_all_blocks=1 00:24:02.042 --rc geninfo_unexecuted_blocks=1 00:24:02.042 00:24:02.042 ' 00:24:02.042 02:28:03 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:02.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.042 --rc genhtml_branch_coverage=1 00:24:02.042 --rc genhtml_function_coverage=1 00:24:02.042 --rc genhtml_legend=1 00:24:02.042 --rc geninfo_all_blocks=1 00:24:02.042 --rc geninfo_unexecuted_blocks=1 00:24:02.042 00:24:02.042 ' 00:24:02.042 02:28:03 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:02.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.042 --rc genhtml_branch_coverage=1 00:24:02.042 --rc genhtml_function_coverage=1 00:24:02.042 --rc genhtml_legend=1 00:24:02.042 --rc geninfo_all_blocks=1 00:24:02.042 --rc geninfo_unexecuted_blocks=1 00:24:02.042 00:24:02.042 ' 00:24:02.042 02:28:03 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:02.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.042 --rc genhtml_branch_coverage=1 00:24:02.042 --rc genhtml_function_coverage=1 00:24:02.042 --rc genhtml_legend=1 00:24:02.042 --rc geninfo_all_blocks=1 00:24:02.042 --rc geninfo_unexecuted_blocks=1 00:24:02.042 00:24:02.042 ' 00:24:02.042 02:28:03 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.042 02:28:03 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.042 02:28:03 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:02.301 02:28:03 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:24:02.301 02:28:03 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.301 02:28:03 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.301 02:28:03 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.301 02:28:03 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.301 02:28:03 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.301 02:28:03 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.301 02:28:03 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:24:02.301 02:28:03 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.301 02:28:03 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:02.301 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:02.301 02:28:03 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:24:02.301 02:28:03 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:02.301 02:28:03 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:02.301 02:28:03 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:24:02.301 02:28:03 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:02.301 02:28:03 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.301 02:28:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:02.302 02:28:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:02.302 Cannot find device 
"nvmf_init_br" 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@162 -- # true 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:02.302 Cannot find device "nvmf_init_br2" 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@163 -- # true 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:02.302 Cannot find device "nvmf_tgt_br" 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@164 -- # true 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:02.302 Cannot find device "nvmf_tgt_br2" 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@165 -- # true 00:24:02.302 02:28:03 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:02.302 Cannot find device "nvmf_init_br" 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@166 -- # true 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:02.302 Cannot find device "nvmf_init_br2" 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@167 -- # true 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:02.302 Cannot find device "nvmf_tgt_br" 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@168 -- # true 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:02.302 Cannot find device "nvmf_tgt_br2" 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@169 -- # true 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:02.302 Cannot find device "nvmf_br" 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@170 -- # true 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:02.302 Cannot find device "nvmf_init_if" 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@171 -- # true 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:02.302 Cannot find device "nvmf_init_if2" 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@172 -- # true 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:02.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@173 -- # true 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:02.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@174 -- # true 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:02.302 02:28:04 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:02.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:02.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:24:02.561 00:24:02.561 --- 10.0.0.3 ping statistics --- 00:24:02.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.561 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:02.561 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:02.561 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:24:02.561 00:24:02.561 --- 10.0.0.4 ping statistics --- 00:24:02.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.561 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:02.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:24:02.561 00:24:02.561 --- 10.0.0.1 ping statistics --- 00:24:02.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.561 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:02.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:24:02.561 00:24:02.561 --- 10.0.0.2 ping statistics --- 00:24:02.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.561 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:24:02.561 02:28:04 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:02.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:02.820 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:02.820 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:03.080 02:28:04 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.080 02:28:04 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:03.080 02:28:04 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:03.080 02:28:04 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.080 02:28:04 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:03.080 02:28:04 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:03.080 02:28:04 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:03.080 02:28:04 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:24:03.080 02:28:04 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:03.080 02:28:04 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.080 02:28:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:03.080 02:28:04 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=98206 00:24:03.080 02:28:04 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:03.080 02:28:04 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 98206 00:24:03.080 02:28:04 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 98206 ']' 00:24:03.080 02:28:04 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.080 02:28:04 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.080 02:28:04 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
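The "Cannot find device" / "Cannot open network namespace" messages and the ping checks above come from nvmf_veth_init first tearing down any leftover links and then rebuilding the veth/namespace topology: initiator interfaces on the host, target interfaces inside nvmf_tgt_ns_spdk, all bridged, with 10.0.0.1-10.0.0.4 assigned. A minimal standalone sketch of the same idea, with illustrative names and without the bridge or iptables rules the harness adds:

ip netns add tgt_ns
ip link add host_if type veth peer name tgt_if     # veth pair: host side / target side
ip link set tgt_if netns tgt_ns                    # move the target end into the namespace
ip addr add 10.0.0.1/24 dev host_if
ip netns exec tgt_ns ip addr add 10.0.0.3/24 dev tgt_if
ip link set host_if up
ip netns exec tgt_ns ip link set tgt_if up
ip netns exec tgt_ns ip link set lo up
ping -c 1 10.0.0.3                                 # same reachability check the harness runs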
00:24:03.080 02:28:04 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.080 02:28:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:03.080 [2024-11-08 02:28:04.820943] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:03.080 [2024-11-08 02:28:04.821042] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.080 [2024-11-08 02:28:04.960566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.340 [2024-11-08 02:28:05.004154] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.340 [2024-11-08 02:28:05.004206] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.340 [2024-11-08 02:28:05.004220] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.340 [2024-11-08 02:28:05.004231] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.340 [2024-11-08 02:28:05.004240] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.340 [2024-11-08 02:28:05.004272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.340 [2024-11-08 02:28:05.040816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:03.340 02:28:05 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.340 02:28:05 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:24:03.340 02:28:05 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:03.340 02:28:05 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.340 02:28:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:03.340 02:28:05 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.340 02:28:05 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:24:03.340 02:28:05 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:03.340 02:28:05 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.340 02:28:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:03.340 [2024-11-08 02:28:05.136923] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.340 02:28:05 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.340 02:28:05 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:03.340 02:28:05 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:03.340 02:28:05 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:03.340 02:28:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:03.340 ************************************ 00:24:03.340 START TEST fio_dif_1_default 00:24:03.340 ************************************ 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:24:03.340 02:28:05 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:03.340 bdev_null0 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:03.340 [2024-11-08 02:28:05.181066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:03.340 { 00:24:03.340 "params": { 00:24:03.340 "name": "Nvme$subsystem", 00:24:03.340 "trtype": "$TEST_TRANSPORT", 00:24:03.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.340 "adrfam": "ipv4", 00:24:03.340 "trsvcid": "$NVMF_PORT", 00:24:03.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.340 "hdgst": ${hdgst:-false}, 00:24:03.340 "ddgst": ${ddgst:-false} 00:24:03.340 }, 00:24:03.340 "method": "bdev_nvme_attach_controller" 00:24:03.340 } 00:24:03.340 EOF 00:24:03.340 )") 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
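The steps traced here assemble two file descriptors for fio: an SPDK JSON config that attaches the target as bdev Nvme0n1 over TCP, and a fio job file, both consumed through the spdk_bdev ioengine loaded via LD_PRELOAD. A rough standalone equivalent (a sketch; the temporary paths and job-file values are illustrative, the plugin path matches the one in the trace):

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0"
          }
        }
      ]
    }
  ]
}
EOF
cat > /tmp/dif.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
filename=Nvme0n1
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    fio --spdk_json_conf=/tmp/nvme0.json /tmp/dif.fio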
00:24:03.340 02:28:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:24:03.341 02:28:05 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:03.341 "params": { 00:24:03.341 "name": "Nvme0", 00:24:03.341 "trtype": "tcp", 00:24:03.341 "traddr": "10.0.0.3", 00:24:03.341 "adrfam": "ipv4", 00:24:03.341 "trsvcid": "4420", 00:24:03.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.341 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:03.341 "hdgst": false, 00:24:03.341 "ddgst": false 00:24:03.341 }, 00:24:03.341 "method": "bdev_nvme_attach_controller" 00:24:03.341 }' 00:24:03.341 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:03.341 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:03.341 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.600 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.600 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:03.600 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:03.600 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:03.600 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:03.600 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:03.600 02:28:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:03.600 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:03.600 fio-3.35 00:24:03.600 Starting 1 thread 00:24:15.809 00:24:15.809 filename0: (groupid=0, jobs=1): err= 0: pid=98265: Fri Nov 8 02:28:15 2024 00:24:15.809 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(392MiB/10001msec) 00:24:15.809 slat (nsec): min=5842, max=76871, avg=7560.64, stdev=3287.82 00:24:15.809 clat (usec): min=313, max=3629, avg=375.66, stdev=43.96 00:24:15.809 lat (usec): min=319, max=3655, avg=383.22, stdev=44.72 00:24:15.809 clat percentiles (usec): 00:24:15.809 | 1.00th=[ 322], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 343], 00:24:15.810 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 379], 00:24:15.810 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 445], 00:24:15.810 | 99.00th=[ 502], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 603], 00:24:15.810 | 99.99th=[ 693] 00:24:15.810 bw ( KiB/s): min=37376, max=41280, per=100.00%, avg=40208.84, stdev=835.23, samples=19 00:24:15.810 iops : min= 9344, max=10320, avg=10052.21, stdev=208.81, samples=19 00:24:15.810 lat (usec) : 500=98.90%, 750=1.09% 00:24:15.810 lat (msec) : 4=0.01% 00:24:15.810 cpu : usr=85.14%, sys=12.97%, ctx=120, majf=0, minf=0 00:24:15.810 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.810 issued rwts: total=100472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.810 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:15.810 00:24:15.810 Run status group 0 (all jobs): 00:24:15.810 
READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=392MiB (412MB), run=10001-10001msec 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 ************************************ 00:24:15.810 END TEST fio_dif_1_default 00:24:15.810 ************************************ 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.810 00:24:15.810 real 0m10.870s 00:24:15.810 user 0m9.089s 00:24:15.810 sys 0m1.523s 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 02:28:16 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:24:15.810 02:28:16 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:15.810 02:28:16 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 ************************************ 00:24:15.810 START TEST fio_dif_1_multi_subsystems 00:24:15.810 ************************************ 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 bdev_null0 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 [2024-11-08 02:28:16.106794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 bdev_null1 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 
-s 4420 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:24:15.810 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:15.811 { 00:24:15.811 "params": { 00:24:15.811 "name": "Nvme$subsystem", 00:24:15.811 "trtype": "$TEST_TRANSPORT", 00:24:15.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.811 "adrfam": "ipv4", 00:24:15.811 "trsvcid": "$NVMF_PORT", 00:24:15.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.811 "hdgst": ${hdgst:-false}, 00:24:15.811 "ddgst": ${ddgst:-false} 00:24:15.811 }, 00:24:15.811 "method": "bdev_nvme_attach_controller" 00:24:15.811 } 00:24:15.811 EOF 00:24:15.811 )") 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.811 02:28:16 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:15.811 { 00:24:15.811 "params": { 00:24:15.811 "name": "Nvme$subsystem", 00:24:15.811 "trtype": "$TEST_TRANSPORT", 00:24:15.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.811 "adrfam": "ipv4", 00:24:15.811 "trsvcid": "$NVMF_PORT", 00:24:15.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.811 "hdgst": ${hdgst:-false}, 00:24:15.811 "ddgst": ${ddgst:-false} 00:24:15.811 }, 00:24:15.811 "method": "bdev_nvme_attach_controller" 00:24:15.811 } 00:24:15.811 EOF 00:24:15.811 )") 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:15.811 "params": { 00:24:15.811 "name": "Nvme0", 00:24:15.811 "trtype": "tcp", 00:24:15.811 "traddr": "10.0.0.3", 00:24:15.811 "adrfam": "ipv4", 00:24:15.811 "trsvcid": "4420", 00:24:15.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.811 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:15.811 "hdgst": false, 00:24:15.811 "ddgst": false 00:24:15.811 }, 00:24:15.811 "method": "bdev_nvme_attach_controller" 00:24:15.811 },{ 00:24:15.811 "params": { 00:24:15.811 "name": "Nvme1", 00:24:15.811 "trtype": "tcp", 00:24:15.811 "traddr": "10.0.0.3", 00:24:15.811 "adrfam": "ipv4", 00:24:15.811 "trsvcid": "4420", 00:24:15.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.811 "hdgst": false, 00:24:15.811 "ddgst": false 00:24:15.811 }, 00:24:15.811 "method": "bdev_nvme_attach_controller" 00:24:15.811 }' 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:15.811 
02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:15.811 02:28:16 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.811 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:15.811 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:15.811 fio-3.35 00:24:15.811 Starting 2 threads 00:24:25.790 00:24:25.790 filename0: (groupid=0, jobs=1): err= 0: pid=98425: Fri Nov 8 02:28:26 2024 00:24:25.790 read: IOPS=5363, BW=21.0MiB/s (22.0MB/s)(210MiB/10001msec) 00:24:25.790 slat (nsec): min=6222, max=75490, avg=12833.91, stdev=4581.10 00:24:25.790 clat (usec): min=435, max=1426, avg=709.93, stdev=48.80 00:24:25.790 lat (usec): min=441, max=1449, avg=722.76, stdev=49.49 00:24:25.790 clat percentiles (usec): 00:24:25.790 | 1.00th=[ 635], 5.00th=[ 652], 10.00th=[ 660], 20.00th=[ 668], 00:24:25.790 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 709], 00:24:25.790 | 70.00th=[ 725], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 799], 00:24:25.790 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 963], 99.95th=[ 1012], 00:24:25.790 | 99.99th=[ 1123] 00:24:25.790 bw ( KiB/s): min=20800, max=21856, per=50.03%, avg=21466.95, stdev=270.53, samples=19 00:24:25.790 iops : min= 5200, max= 5464, avg=5366.74, stdev=67.63, samples=19 00:24:25.790 lat (usec) : 500=0.01%, 750=82.98%, 1000=16.95% 00:24:25.790 lat (msec) : 2=0.06% 00:24:25.790 cpu : usr=89.37%, sys=9.24%, ctx=75, majf=0, minf=0 00:24:25.790 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:25.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.790 issued rwts: total=53640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:25.790 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:25.790 filename1: (groupid=0, jobs=1): err= 0: pid=98426: Fri Nov 8 02:28:26 2024 00:24:25.790 read: IOPS=5363, BW=20.9MiB/s (22.0MB/s)(210MiB/10001msec) 00:24:25.790 slat (nsec): min=6279, max=64964, avg=12602.25, stdev=4391.53 00:24:25.790 clat (usec): min=557, max=1556, avg=711.87, stdev=55.00 00:24:25.790 lat (usec): min=563, max=1582, avg=724.48, stdev=55.98 00:24:25.790 clat percentiles (usec): 00:24:25.790 | 1.00th=[ 603], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 668], 00:24:25.790 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 709], 60.00th=[ 717], 00:24:25.790 | 70.00th=[ 734], 80.00th=[ 750], 90.00th=[ 783], 95.00th=[ 807], 00:24:25.790 | 99.00th=[ 881], 99.50th=[ 906], 99.90th=[ 971], 99.95th=[ 1020], 00:24:25.790 | 99.99th=[ 1123] 00:24:25.790 bw ( KiB/s): min=20800, max=21856, per=50.03%, avg=21465.26, stdev=270.60, samples=19 00:24:25.790 iops : min= 5200, max= 5464, avg=5366.32, stdev=67.65, samples=19 00:24:25.790 lat (usec) : 750=80.04%, 1000=19.90% 00:24:25.790 lat (msec) : 2=0.06% 00:24:25.790 cpu : usr=89.81%, sys=8.86%, ctx=72, majf=0, minf=0 00:24:25.790 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:25.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:25.790 issued rwts: total=53636,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:24:25.790 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:25.790 00:24:25.790 Run status group 0 (all jobs): 00:24:25.790 READ: bw=41.9MiB/s (43.9MB/s), 20.9MiB/s-21.0MiB/s (22.0MB/s-22.0MB/s), io=419MiB (439MB), run=10001-10001msec 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.790 ************************************ 00:24:25.790 END TEST fio_dif_1_multi_subsystems 00:24:25.790 ************************************ 00:24:25.790 00:24:25.790 real 0m11.000s 00:24:25.790 user 0m18.598s 00:24:25.790 sys 0m2.041s 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.790 02:28:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:25.790 02:28:27 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:25.790 02:28:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:25.790 02:28:27 nvmf_dif 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:25.790 02:28:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:25.790 ************************************ 00:24:25.790 START TEST fio_dif_rand_params 00:24:25.790 ************************************ 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.790 bdev_null0 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.790 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:25.791 [2024-11-08 02:28:27.164733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 
-- # fio /dev/fd/62 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:25.791 { 00:24:25.791 "params": { 00:24:25.791 "name": "Nvme$subsystem", 00:24:25.791 "trtype": "$TEST_TRANSPORT", 00:24:25.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.791 "adrfam": "ipv4", 00:24:25.791 "trsvcid": "$NVMF_PORT", 00:24:25.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.791 "hdgst": ${hdgst:-false}, 00:24:25.791 "ddgst": ${ddgst:-false} 00:24:25.791 }, 00:24:25.791 "method": "bdev_nvme_attach_controller" 00:24:25.791 } 00:24:25.791 EOF 00:24:25.791 )") 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
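For reference, the target setup traced here reduces to a short RPC sequence; a minimal sketch, assuming SPDK's stock scripts/rpc.py client (rpc_cmd in dif.sh is a thin wrapper around it) and the address/port shown in the trace:

  # null bdev with 16-byte metadata and DIF type 3, matching the bdev_null_create call above
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # NVMe-oF subsystem that exports it over NVMe/TCP on 10.0.0.3:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420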
00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:25.791 "params": { 00:24:25.791 "name": "Nvme0", 00:24:25.791 "trtype": "tcp", 00:24:25.791 "traddr": "10.0.0.3", 00:24:25.791 "adrfam": "ipv4", 00:24:25.791 "trsvcid": "4420", 00:24:25.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:25.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:25.791 "hdgst": false, 00:24:25.791 "ddgst": false 00:24:25.791 }, 00:24:25.791 "method": "bdev_nvme_attach_controller" 00:24:25.791 }' 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:25.791 02:28:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:25.791 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:25.791 ... 
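The fio invocation that follows is driven entirely by the two file descriptors on its command line: /dev/fd/62 carries the bdev_nvme_attach_controller JSON printed above, and /dev/fd/61 carries the job file built by gen_fio_conf. A rough stand-alone equivalent using ordinary files instead of descriptors (bdev.json and dif.fio are illustrative names, not paths used by the test):

  # preload the SPDK fio plugin and point fio at the JSON bdev config plus the job file
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio
  # bdev.json: the attach-controller parameters shown in the trace (tcp, 10.0.0.3:4420, cnode0)
  # dif.fio:   the generated job file; for this test bs=128k, iodepth=3, numjobs=3, runtime=5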
00:24:25.791 fio-3.35 00:24:25.791 Starting 3 threads 00:24:31.059 00:24:31.059 filename0: (groupid=0, jobs=1): err= 0: pid=98582: Fri Nov 8 02:28:32 2024 00:24:31.059 read: IOPS=290, BW=36.3MiB/s (38.0MB/s)(182MiB/5005msec) 00:24:31.059 slat (nsec): min=6498, max=63366, avg=14486.65, stdev=5558.57 00:24:31.059 clat (usec): min=5517, max=11751, avg=10308.90, stdev=391.98 00:24:31.059 lat (usec): min=5524, max=11779, avg=10323.38, stdev=392.72 00:24:31.059 clat percentiles (usec): 00:24:31.059 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:24:31.059 | 30.00th=[10159], 40.00th=[10159], 50.00th=[10159], 60.00th=[10290], 00:24:31.059 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10814], 95.00th=[10945], 00:24:31.059 | 99.00th=[11600], 99.50th=[11600], 99.90th=[11731], 99.95th=[11731], 00:24:31.059 | 99.99th=[11731] 00:24:31.059 bw ( KiB/s): min=36790, max=37632, per=33.33%, avg=37111.78, stdev=390.90, samples=9 00:24:31.059 iops : min= 287, max= 294, avg=289.89, stdev= 3.10, samples=9 00:24:31.059 lat (msec) : 10=4.13%, 20=95.87% 00:24:31.059 cpu : usr=90.41%, sys=9.03%, ctx=6, majf=0, minf=0 00:24:31.059 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:31.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.059 issued rwts: total=1452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.059 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:31.059 filename0: (groupid=0, jobs=1): err= 0: pid=98583: Fri Nov 8 02:28:32 2024 00:24:31.059 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(182MiB/5007msec) 00:24:31.059 slat (nsec): min=6625, max=65303, avg=14876.85, stdev=5004.21 00:24:31.059 clat (usec): min=7358, max=12800, avg=10311.48, stdev=388.46 00:24:31.059 lat (usec): min=7371, max=12825, avg=10326.36, stdev=388.79 00:24:31.059 clat percentiles (usec): 00:24:31.059 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:24:31.059 | 30.00th=[10159], 40.00th=[10159], 50.00th=[10159], 60.00th=[10290], 00:24:31.059 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11076], 00:24:31.059 | 99.00th=[11600], 99.50th=[11731], 99.90th=[12780], 99.95th=[12780], 00:24:31.059 | 99.99th=[12780] 00:24:31.059 bw ( KiB/s): min=36096, max=38400, per=33.30%, avg=37087.00, stdev=731.56, samples=10 00:24:31.059 iops : min= 282, max= 300, avg=289.70, stdev= 5.74, samples=10 00:24:31.059 lat (msec) : 10=3.44%, 20=96.56% 00:24:31.059 cpu : usr=91.21%, sys=8.23%, ctx=9, majf=0, minf=0 00:24:31.059 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:31.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.059 issued rwts: total=1452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.059 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:31.059 filename0: (groupid=0, jobs=1): err= 0: pid=98584: Fri Nov 8 02:28:32 2024 00:24:31.059 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(182MiB/5007msec) 00:24:31.059 slat (nsec): min=6604, max=63583, avg=14962.05, stdev=5167.33 00:24:31.059 clat (usec): min=7354, max=11762, avg=10312.00, stdev=362.06 00:24:31.059 lat (usec): min=7367, max=11781, avg=10326.96, stdev=362.61 00:24:31.059 clat percentiles (usec): 00:24:31.059 | 1.00th=[ 9896], 5.00th=[10028], 10.00th=[10028], 20.00th=[10028], 00:24:31.059 | 30.00th=[10159], 40.00th=[10159], 
50.00th=[10159], 60.00th=[10290], 00:24:31.059 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11076], 00:24:31.059 | 99.00th=[11600], 99.50th=[11600], 99.90th=[11731], 99.95th=[11731], 00:24:31.059 | 99.99th=[11731] 00:24:31.059 bw ( KiB/s): min=36096, max=38400, per=33.30%, avg=37087.00, stdev=731.56, samples=10 00:24:31.059 iops : min= 282, max= 300, avg=289.70, stdev= 5.74, samples=10 00:24:31.059 lat (msec) : 10=3.44%, 20=96.56% 00:24:31.059 cpu : usr=90.05%, sys=9.15%, ctx=65, majf=0, minf=0 00:24:31.059 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:31.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:31.059 issued rwts: total=1452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:31.059 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:31.059 00:24:31.059 Run status group 0 (all jobs): 00:24:31.059 READ: bw=109MiB/s (114MB/s), 36.2MiB/s-36.3MiB/s (38.0MB/s-38.0MB/s), io=545MiB (571MB), run=5005-5007msec 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.319 02:28:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:31.319 
02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.319 bdev_null0 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.319 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 [2024-11-08 02:28:33.033403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 bdev_null1 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.320 02:28:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 bdev_null2 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:31.320 02:28:33 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:31.320 { 00:24:31.320 "params": { 00:24:31.320 "name": "Nvme$subsystem", 00:24:31.320 "trtype": "$TEST_TRANSPORT", 00:24:31.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.320 "adrfam": "ipv4", 00:24:31.320 "trsvcid": "$NVMF_PORT", 00:24:31.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.320 "hdgst": ${hdgst:-false}, 00:24:31.320 "ddgst": ${ddgst:-false} 00:24:31.320 }, 00:24:31.320 "method": "bdev_nvme_attach_controller" 00:24:31.320 } 00:24:31.320 EOF 00:24:31.320 )") 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:31.320 { 00:24:31.320 "params": { 00:24:31.320 "name": "Nvme$subsystem", 00:24:31.320 "trtype": "$TEST_TRANSPORT", 00:24:31.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.320 "adrfam": "ipv4", 00:24:31.320 "trsvcid": "$NVMF_PORT", 00:24:31.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.320 "hdgst": ${hdgst:-false}, 00:24:31.320 "ddgst": ${ddgst:-false} 00:24:31.320 }, 00:24:31.320 "method": "bdev_nvme_attach_controller" 00:24:31.320 } 00:24:31.320 EOF 00:24:31.320 )") 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:31.320 
02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:31.320 { 00:24:31.320 "params": { 00:24:31.320 "name": "Nvme$subsystem", 00:24:31.320 "trtype": "$TEST_TRANSPORT", 00:24:31.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:31.320 "adrfam": "ipv4", 00:24:31.320 "trsvcid": "$NVMF_PORT", 00:24:31.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:31.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:31.320 "hdgst": ${hdgst:-false}, 00:24:31.320 "ddgst": ${ddgst:-false} 00:24:31.320 }, 00:24:31.320 "method": "bdev_nvme_attach_controller" 00:24:31.320 } 00:24:31.320 EOF 00:24:31.320 )") 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:31.320 02:28:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:31.320 "params": { 00:24:31.320 "name": "Nvme0", 00:24:31.320 "trtype": "tcp", 00:24:31.320 "traddr": "10.0.0.3", 00:24:31.320 "adrfam": "ipv4", 00:24:31.320 "trsvcid": "4420", 00:24:31.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:31.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:31.320 "hdgst": false, 00:24:31.320 "ddgst": false 00:24:31.320 }, 00:24:31.320 "method": "bdev_nvme_attach_controller" 00:24:31.320 },{ 00:24:31.320 "params": { 00:24:31.321 "name": "Nvme1", 00:24:31.321 "trtype": "tcp", 00:24:31.321 "traddr": "10.0.0.3", 00:24:31.321 "adrfam": "ipv4", 00:24:31.321 "trsvcid": "4420", 00:24:31.321 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:31.321 "hdgst": false, 00:24:31.321 "ddgst": false 00:24:31.321 }, 00:24:31.321 "method": "bdev_nvme_attach_controller" 00:24:31.321 },{ 00:24:31.321 "params": { 00:24:31.321 "name": "Nvme2", 00:24:31.321 "trtype": "tcp", 00:24:31.321 "traddr": "10.0.0.3", 00:24:31.321 "adrfam": "ipv4", 00:24:31.321 "trsvcid": "4420", 00:24:31.321 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:31.321 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:31.321 "hdgst": false, 00:24:31.321 "ddgst": false 00:24:31.321 }, 00:24:31.321 "method": "bdev_nvme_attach_controller" 00:24:31.321 }' 00:24:31.321 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:31.321 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:31.321 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.321 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:31.321 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:31.321 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:31.321 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:31.321 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:31.321 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- 
# LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:31.321 02:28:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:31.580 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:31.580 ... 00:24:31.580 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:31.580 ... 00:24:31.580 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:31.580 ... 00:24:31.580 fio-3.35 00:24:31.580 Starting 24 threads 00:24:43.795 00:24:43.795 filename0: (groupid=0, jobs=1): err= 0: pid=98679: Fri Nov 8 02:28:43 2024 00:24:43.795 read: IOPS=217, BW=871KiB/s (892kB/s)(8732KiB/10022msec) 00:24:43.795 slat (usec): min=3, max=8025, avg=31.18, stdev=311.84 00:24:43.795 clat (msec): min=26, max=163, avg=73.28, stdev=20.91 00:24:43.795 lat (msec): min=26, max=163, avg=73.31, stdev=20.91 00:24:43.795 clat percentiles (msec): 00:24:43.795 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:24:43.795 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:24:43.795 | 70.00th=[ 81], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 112], 00:24:43.795 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 165], 00:24:43.795 | 99.99th=[ 165] 00:24:43.795 bw ( KiB/s): min= 616, max= 1048, per=4.25%, avg=869.20, stdev=126.02, samples=20 00:24:43.795 iops : min= 154, max= 262, avg=217.30, stdev=31.50, samples=20 00:24:43.795 lat (msec) : 50=14.43%, 100=73.75%, 250=11.82% 00:24:43.795 cpu : usr=40.19%, sys=2.48%, ctx=1183, majf=0, minf=9 00:24:43.795 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:43.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.795 complete : 0=0.0%, 4=88.0%, 8=11.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.795 issued rwts: total=2183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.795 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.795 filename0: (groupid=0, jobs=1): err= 0: pid=98680: Fri Nov 8 02:28:43 2024 00:24:43.795 read: IOPS=223, BW=894KiB/s (916kB/s)(8968KiB/10027msec) 00:24:43.795 slat (usec): min=3, max=4026, avg=17.50, stdev=84.85 00:24:43.795 clat (msec): min=24, max=161, avg=71.43, stdev=21.34 00:24:43.795 lat (msec): min=24, max=161, avg=71.44, stdev=21.34 00:24:43.795 clat percentiles (msec): 00:24:43.795 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 52], 00:24:43.795 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 74], 00:24:43.795 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 110], 00:24:43.795 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 161], 99.95th=[ 161], 00:24:43.795 | 99.99th=[ 161] 00:24:43.795 bw ( KiB/s): min= 616, max= 1152, per=4.35%, avg=890.40, stdev=126.36, samples=20 00:24:43.795 iops : min= 154, max= 288, avg=222.60, stdev=31.59, samples=20 00:24:43.795 lat (msec) : 50=17.48%, 100=71.19%, 250=11.33% 00:24:43.795 cpu : usr=42.70%, sys=2.32%, ctx=1282, majf=0, minf=9 00:24:43.795 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:43.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.795 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.795 issued rwts: total=2242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.795 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:24:43.795 filename0: (groupid=0, jobs=1): err= 0: pid=98681: Fri Nov 8 02:28:43 2024 00:24:43.795 read: IOPS=225, BW=901KiB/s (923kB/s)(9016KiB/10003msec) 00:24:43.795 slat (usec): min=4, max=12027, avg=27.24, stdev=347.64 00:24:43.795 clat (msec): min=12, max=159, avg=70.87, stdev=21.96 00:24:43.795 lat (msec): min=12, max=159, avg=70.90, stdev=21.96 00:24:43.795 clat percentiles (msec): 00:24:43.795 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 48], 00:24:43.795 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:24:43.795 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 109], 00:24:43.795 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 159], 00:24:43.795 | 99.99th=[ 159] 00:24:43.795 bw ( KiB/s): min= 568, max= 1080, per=4.32%, avg=885.89, stdev=125.41, samples=19 00:24:43.795 iops : min= 142, max= 270, avg=221.47, stdev=31.35, samples=19 00:24:43.795 lat (msec) : 20=0.58%, 50=23.47%, 100=65.08%, 250=10.87% 00:24:43.795 cpu : usr=31.06%, sys=1.95%, ctx=876, majf=0, minf=9 00:24:43.796 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:43.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.796 filename0: (groupid=0, jobs=1): err= 0: pid=98682: Fri Nov 8 02:28:43 2024 00:24:43.796 read: IOPS=215, BW=863KiB/s (884kB/s)(8660KiB/10036msec) 00:24:43.796 slat (usec): min=6, max=8026, avg=21.21, stdev=243.48 00:24:43.796 clat (msec): min=23, max=167, avg=74.01, stdev=20.81 00:24:43.796 lat (msec): min=23, max=167, avg=74.03, stdev=20.81 00:24:43.796 clat percentiles (msec): 00:24:43.796 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:24:43.796 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:24:43.796 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 109], 00:24:43.796 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 167], 00:24:43.796 | 99.99th=[ 167] 00:24:43.796 bw ( KiB/s): min= 592, max= 1120, per=4.21%, avg=861.75, stdev=113.27, samples=20 00:24:43.796 iops : min= 148, max= 280, avg=215.40, stdev=28.34, samples=20 00:24:43.796 lat (msec) : 50=16.44%, 100=71.96%, 250=11.59% 00:24:43.796 cpu : usr=31.51%, sys=1.68%, ctx=847, majf=0, minf=9 00:24:43.796 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:43.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.796 filename0: (groupid=0, jobs=1): err= 0: pid=98683: Fri Nov 8 02:28:43 2024 00:24:43.796 read: IOPS=218, BW=873KiB/s (893kB/s)(8760KiB/10040msec) 00:24:43.796 slat (usec): min=4, max=4021, avg=15.48, stdev=88.57 00:24:43.796 clat (msec): min=2, max=198, avg=73.17, stdev=23.72 00:24:43.796 lat (msec): min=2, max=198, avg=73.19, stdev=23.72 00:24:43.796 clat percentiles (msec): 00:24:43.796 | 1.00th=[ 5], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 56], 00:24:43.796 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 77], 00:24:43.796 | 70.00th=[ 83], 80.00th=[ 93], 90.00th=[ 106], 95.00th=[ 110], 00:24:43.796 | 
99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 165], 99.95th=[ 165], 00:24:43.796 | 99.99th=[ 199] 00:24:43.796 bw ( KiB/s): min= 560, max= 1436, per=4.26%, avg=871.80, stdev=174.26, samples=20 00:24:43.796 iops : min= 140, max= 359, avg=217.95, stdev=43.57, samples=20 00:24:43.796 lat (msec) : 4=0.73%, 10=2.19%, 50=11.32%, 100=72.51%, 250=13.24% 00:24:43.796 cpu : usr=40.31%, sys=2.19%, ctx=1220, majf=0, minf=0 00:24:43.796 IO depths : 1=0.2%, 2=0.5%, 4=1.2%, 8=81.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:43.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 issued rwts: total=2190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.796 filename0: (groupid=0, jobs=1): err= 0: pid=98684: Fri Nov 8 02:28:43 2024 00:24:43.796 read: IOPS=221, BW=885KiB/s (906kB/s)(8868KiB/10025msec) 00:24:43.796 slat (usec): min=4, max=8033, avg=24.58, stdev=255.33 00:24:43.796 clat (msec): min=27, max=158, avg=72.20, stdev=21.42 00:24:43.796 lat (msec): min=27, max=158, avg=72.22, stdev=21.42 00:24:43.796 clat percentiles (msec): 00:24:43.796 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:24:43.796 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:24:43.796 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 104], 95.00th=[ 110], 00:24:43.796 | 99.00th=[ 133], 99.50th=[ 140], 99.90th=[ 159], 99.95th=[ 159], 00:24:43.796 | 99.99th=[ 159] 00:24:43.796 bw ( KiB/s): min= 616, max= 1065, per=4.31%, avg=882.45, stdev=138.44, samples=20 00:24:43.796 iops : min= 154, max= 266, avg=220.60, stdev=34.59, samples=20 00:24:43.796 lat (msec) : 50=20.03%, 100=68.29%, 250=11.68% 00:24:43.796 cpu : usr=37.48%, sys=2.04%, ctx=1049, majf=0, minf=9 00:24:43.796 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:43.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 issued rwts: total=2217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.796 filename0: (groupid=0, jobs=1): err= 0: pid=98685: Fri Nov 8 02:28:43 2024 00:24:43.796 read: IOPS=224, BW=896KiB/s (918kB/s)(8984KiB/10024msec) 00:24:43.796 slat (usec): min=3, max=4026, avg=18.60, stdev=115.40 00:24:43.796 clat (msec): min=24, max=157, avg=71.28, stdev=21.66 00:24:43.796 lat (msec): min=24, max=157, avg=71.30, stdev=21.66 00:24:43.796 clat percentiles (msec): 00:24:43.796 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 51], 00:24:43.796 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 74], 00:24:43.796 | 70.00th=[ 80], 80.00th=[ 86], 90.00th=[ 104], 95.00th=[ 109], 00:24:43.796 | 99.00th=[ 140], 99.50th=[ 146], 99.90th=[ 159], 99.95th=[ 159], 00:24:43.796 | 99.99th=[ 159] 00:24:43.796 bw ( KiB/s): min= 616, max= 1152, per=4.37%, avg=894.50, stdev=136.01, samples=20 00:24:43.796 iops : min= 154, max= 288, avg=223.60, stdev=34.01, samples=20 00:24:43.796 lat (msec) : 50=19.15%, 100=69.37%, 250=11.49% 00:24:43.796 cpu : usr=40.30%, sys=2.22%, ctx=1226, majf=0, minf=9 00:24:43.796 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:43.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 issued 
rwts: total=2246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.796 filename0: (groupid=0, jobs=1): err= 0: pid=98686: Fri Nov 8 02:28:43 2024 00:24:43.796 read: IOPS=216, BW=867KiB/s (888kB/s)(8672KiB/10001msec) 00:24:43.796 slat (usec): min=6, max=8026, avg=26.76, stdev=266.95 00:24:43.796 clat (usec): min=483, max=156855, avg=73670.97, stdev=22391.21 00:24:43.796 lat (usec): min=490, max=156870, avg=73697.73, stdev=22380.40 00:24:43.796 clat percentiles (msec): 00:24:43.796 | 1.00th=[ 13], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:24:43.796 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 78], 00:24:43.796 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 114], 00:24:43.796 | 99.00th=[ 132], 99.50th=[ 140], 99.90th=[ 157], 99.95th=[ 157], 00:24:43.796 | 99.99th=[ 157] 00:24:43.796 bw ( KiB/s): min= 528, max= 1024, per=4.11%, avg=841.26, stdev=138.29, samples=19 00:24:43.796 iops : min= 132, max= 256, avg=210.32, stdev=34.57, samples=19 00:24:43.796 lat (usec) : 500=0.14% 00:24:43.796 lat (msec) : 2=0.32%, 4=0.46%, 20=0.55%, 50=13.10%, 100=71.49% 00:24:43.796 lat (msec) : 250=13.93% 00:24:43.796 cpu : usr=33.21%, sys=2.06%, ctx=1236, majf=0, minf=9 00:24:43.796 IO depths : 1=0.1%, 2=1.8%, 4=7.0%, 8=76.3%, 16=14.9%, 32=0.0%, >=64=0.0% 00:24:43.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 complete : 0=0.0%, 4=88.8%, 8=9.7%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 issued rwts: total=2168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.796 filename1: (groupid=0, jobs=1): err= 0: pid=98687: Fri Nov 8 02:28:43 2024 00:24:43.796 read: IOPS=199, BW=797KiB/s (816kB/s)(8004KiB/10042msec) 00:24:43.796 slat (usec): min=7, max=8024, avg=17.93, stdev=179.13 00:24:43.796 clat (msec): min=31, max=165, avg=80.11, stdev=21.61 00:24:43.796 lat (msec): min=31, max=165, avg=80.13, stdev=21.62 00:24:43.796 clat percentiles (msec): 00:24:43.796 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:24:43.796 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:24:43.796 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:24:43.796 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 163], 99.95th=[ 165], 00:24:43.796 | 99.99th=[ 165] 00:24:43.796 bw ( KiB/s): min= 528, max= 1000, per=3.89%, avg=796.15, stdev=119.76, samples=20 00:24:43.796 iops : min= 132, max= 250, avg=199.00, stdev=29.94, samples=20 00:24:43.796 lat (msec) : 50=9.25%, 100=76.71%, 250=14.04% 00:24:43.796 cpu : usr=31.21%, sys=1.86%, ctx=881, majf=0, minf=9 00:24:43.796 IO depths : 1=0.1%, 2=2.0%, 4=8.2%, 8=74.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:43.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 complete : 0=0.0%, 4=89.9%, 8=8.3%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 issued rwts: total=2001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.796 filename1: (groupid=0, jobs=1): err= 0: pid=98688: Fri Nov 8 02:28:43 2024 00:24:43.796 read: IOPS=193, BW=773KiB/s (791kB/s)(7756KiB/10036msec) 00:24:43.796 slat (usec): min=4, max=10025, avg=31.11, stdev=363.94 00:24:43.796 clat (msec): min=3, max=155, avg=82.48, stdev=25.75 00:24:43.796 lat (msec): min=3, max=155, avg=82.51, stdev=25.75 00:24:43.796 clat percentiles (msec): 00:24:43.796 | 1.00th=[ 6], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 68], 
00:24:43.796 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 83], 00:24:43.796 | 70.00th=[ 95], 80.00th=[ 105], 90.00th=[ 116], 95.00th=[ 136], 00:24:43.796 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 157], 00:24:43.796 | 99.99th=[ 157] 00:24:43.796 bw ( KiB/s): min= 400, max= 1272, per=3.77%, avg=771.20, stdev=185.84, samples=20 00:24:43.796 iops : min= 100, max= 318, avg=192.80, stdev=46.46, samples=20 00:24:43.796 lat (msec) : 4=0.41%, 10=2.06%, 50=5.00%, 100=67.66%, 250=24.86% 00:24:43.796 cpu : usr=42.75%, sys=2.38%, ctx=1351, majf=0, minf=9 00:24:43.796 IO depths : 1=0.2%, 2=4.7%, 4=18.1%, 8=63.4%, 16=13.6%, 32=0.0%, >=64=0.0% 00:24:43.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 complete : 0=0.0%, 4=92.5%, 8=3.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.796 issued rwts: total=1939,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.796 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.796 filename1: (groupid=0, jobs=1): err= 0: pid=98689: Fri Nov 8 02:28:43 2024 00:24:43.796 read: IOPS=215, BW=863KiB/s (883kB/s)(8660KiB/10038msec) 00:24:43.796 slat (usec): min=8, max=11032, avg=23.54, stdev=292.73 00:24:43.796 clat (msec): min=23, max=168, avg=74.01, stdev=20.86 00:24:43.796 lat (msec): min=23, max=168, avg=74.04, stdev=20.87 00:24:43.796 clat percentiles (msec): 00:24:43.796 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:24:43.796 | 30.00th=[ 63], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 78], 00:24:43.797 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 111], 00:24:43.797 | 99.00th=[ 130], 99.50th=[ 142], 99.90th=[ 159], 99.95th=[ 159], 00:24:43.797 | 99.99th=[ 169] 00:24:43.797 bw ( KiB/s): min= 584, max= 1128, per=4.21%, avg=861.75, stdev=111.76, samples=20 00:24:43.797 iops : min= 146, max= 282, avg=215.40, stdev=27.96, samples=20 00:24:43.797 lat (msec) : 50=14.41%, 100=72.52%, 250=13.07% 00:24:43.797 cpu : usr=43.65%, sys=2.36%, ctx=1286, majf=0, minf=9 00:24:43.797 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:24:43.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.797 filename1: (groupid=0, jobs=1): err= 0: pid=98690: Fri Nov 8 02:28:43 2024 00:24:43.797 read: IOPS=213, BW=854KiB/s (874kB/s)(8560KiB/10027msec) 00:24:43.797 slat (usec): min=3, max=8027, avg=26.15, stdev=299.84 00:24:43.797 clat (msec): min=35, max=165, avg=74.77, stdev=20.95 00:24:43.797 lat (msec): min=35, max=165, avg=74.79, stdev=20.95 00:24:43.797 clat percentiles (msec): 00:24:43.797 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 57], 00:24:43.797 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:24:43.797 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 111], 00:24:43.797 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 167], 99.95th=[ 167], 00:24:43.797 | 99.99th=[ 167] 00:24:43.797 bw ( KiB/s): min= 560, max= 976, per=4.15%, avg=849.60, stdev=124.17, samples=20 00:24:43.797 iops : min= 140, max= 244, avg=212.40, stdev=31.04, samples=20 00:24:43.797 lat (msec) : 50=15.93%, 100=71.82%, 250=12.24% 00:24:43.797 cpu : usr=33.56%, sys=1.93%, ctx=941, majf=0, minf=9 00:24:43.797 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:43.797 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 issued rwts: total=2140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.797 filename1: (groupid=0, jobs=1): err= 0: pid=98691: Fri Nov 8 02:28:43 2024 00:24:43.797 read: IOPS=212, BW=851KiB/s (872kB/s)(8536KiB/10025msec) 00:24:43.797 slat (usec): min=7, max=4024, avg=18.06, stdev=97.37 00:24:43.797 clat (msec): min=37, max=164, avg=75.02, stdev=19.45 00:24:43.797 lat (msec): min=37, max=164, avg=75.03, stdev=19.45 00:24:43.797 clat percentiles (msec): 00:24:43.797 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 57], 00:24:43.797 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:24:43.797 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 104], 95.00th=[ 111], 00:24:43.797 | 99.00th=[ 131], 99.50th=[ 138], 99.90th=[ 165], 99.95th=[ 165], 00:24:43.797 | 99.99th=[ 165] 00:24:43.797 bw ( KiB/s): min= 560, max= 976, per=4.15%, avg=849.25, stdev=116.84, samples=20 00:24:43.797 iops : min= 140, max= 244, avg=212.30, stdev=29.22, samples=20 00:24:43.797 lat (msec) : 50=11.48%, 100=76.99%, 250=11.53% 00:24:43.797 cpu : usr=41.66%, sys=2.08%, ctx=1257, majf=0, minf=9 00:24:43.797 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=79.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:43.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 complete : 0=0.0%, 4=88.3%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 issued rwts: total=2134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.797 filename1: (groupid=0, jobs=1): err= 0: pid=98692: Fri Nov 8 02:28:43 2024 00:24:43.797 read: IOPS=205, BW=821KiB/s (840kB/s)(8236KiB/10036msec) 00:24:43.797 slat (usec): min=6, max=8036, avg=25.83, stdev=305.83 00:24:43.797 clat (msec): min=35, max=180, avg=77.78, stdev=22.12 00:24:43.797 lat (msec): min=35, max=180, avg=77.81, stdev=22.12 00:24:43.797 clat percentiles (msec): 00:24:43.797 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:24:43.797 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 80], 00:24:43.797 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 112], 00:24:43.797 | 99.00th=[ 165], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 180], 00:24:43.797 | 99.99th=[ 180] 00:24:43.797 bw ( KiB/s): min= 400, max= 1008, per=3.99%, avg=816.95, stdev=145.73, samples=20 00:24:43.797 iops : min= 100, max= 252, avg=204.20, stdev=36.45, samples=20 00:24:43.797 lat (msec) : 50=12.82%, 100=71.83%, 250=15.35% 00:24:43.797 cpu : usr=36.22%, sys=2.14%, ctx=1035, majf=0, minf=9 00:24:43.797 IO depths : 1=0.1%, 2=2.6%, 4=10.2%, 8=72.2%, 16=14.9%, 32=0.0%, >=64=0.0% 00:24:43.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 complete : 0=0.0%, 4=90.2%, 8=7.6%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 issued rwts: total=2059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.797 filename1: (groupid=0, jobs=1): err= 0: pid=98693: Fri Nov 8 02:28:43 2024 00:24:43.797 read: IOPS=216, BW=867KiB/s (888kB/s)(8696KiB/10026msec) 00:24:43.797 slat (usec): min=3, max=8026, avg=28.26, stdev=265.67 00:24:43.797 clat (msec): min=26, max=163, avg=73.60, stdev=20.43 00:24:43.797 lat (msec): min=26, max=163, avg=73.63, stdev=20.43 00:24:43.797 clat percentiles (msec): 00:24:43.797 | 
1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:24:43.797 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:24:43.797 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 101], 95.00th=[ 110], 00:24:43.797 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 165], 99.95th=[ 165], 00:24:43.797 | 99.99th=[ 165] 00:24:43.797 bw ( KiB/s): min= 592, max= 1024, per=4.23%, avg=865.90, stdev=121.46, samples=20 00:24:43.797 iops : min= 148, max= 256, avg=216.45, stdev=30.42, samples=20 00:24:43.797 lat (msec) : 50=14.63%, 100=75.11%, 250=10.26% 00:24:43.797 cpu : usr=37.04%, sys=2.23%, ctx=1091, majf=0, minf=9 00:24:43.797 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:43.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.797 filename1: (groupid=0, jobs=1): err= 0: pid=98694: Fri Nov 8 02:28:43 2024 00:24:43.797 read: IOPS=215, BW=862KiB/s (883kB/s)(8656KiB/10040msec) 00:24:43.797 slat (nsec): min=4203, max=75973, avg=13395.51, stdev=5271.45 00:24:43.797 clat (msec): min=2, max=167, avg=74.07, stdev=23.66 00:24:43.797 lat (msec): min=2, max=167, avg=74.09, stdev=23.66 00:24:43.797 clat percentiles (msec): 00:24:43.797 | 1.00th=[ 5], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 56], 00:24:43.797 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 78], 00:24:43.797 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 113], 00:24:43.797 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 167], 99.95th=[ 167], 00:24:43.797 | 99.99th=[ 167] 00:24:43.797 bw ( KiB/s): min= 584, max= 1258, per=4.21%, avg=861.30, stdev=147.28, samples=20 00:24:43.797 iops : min= 146, max= 314, avg=215.30, stdev=36.75, samples=20 00:24:43.797 lat (msec) : 4=0.74%, 10=1.48%, 50=12.01%, 100=70.79%, 250=14.97% 00:24:43.797 cpu : usr=33.21%, sys=1.90%, ctx=1246, majf=0, minf=9 00:24:43.797 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=80.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:43.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.797 filename2: (groupid=0, jobs=1): err= 0: pid=98695: Fri Nov 8 02:28:43 2024 00:24:43.797 read: IOPS=212, BW=851KiB/s (872kB/s)(8528KiB/10018msec) 00:24:43.797 slat (usec): min=4, max=8040, avg=39.32, stdev=376.04 00:24:43.797 clat (msec): min=16, max=178, avg=74.94, stdev=21.84 00:24:43.797 lat (msec): min=16, max=178, avg=74.98, stdev=21.84 00:24:43.797 clat percentiles (msec): 00:24:43.797 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:24:43.797 | 30.00th=[ 65], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:24:43.797 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 113], 00:24:43.797 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 159], 99.95th=[ 180], 00:24:43.797 | 99.99th=[ 180] 00:24:43.797 bw ( KiB/s): min= 560, max= 1008, per=4.09%, avg=837.05, stdev=136.68, samples=19 00:24:43.797 iops : min= 140, max= 252, avg=209.26, stdev=34.17, samples=19 00:24:43.797 lat (msec) : 20=0.42%, 50=13.41%, 100=71.58%, 250=14.59% 00:24:43.797 cpu : usr=38.73%, sys=2.31%, ctx=1214, majf=0, minf=9 00:24:43.797 IO depths : 1=0.1%, 2=1.9%, 
4=7.6%, 8=75.6%, 16=14.8%, 32=0.0%, >=64=0.0% 00:24:43.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 complete : 0=0.0%, 4=89.0%, 8=9.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.797 filename2: (groupid=0, jobs=1): err= 0: pid=98696: Fri Nov 8 02:28:43 2024 00:24:43.797 read: IOPS=210, BW=841KiB/s (861kB/s)(8436KiB/10035msec) 00:24:43.797 slat (usec): min=3, max=8024, avg=17.13, stdev=174.51 00:24:43.797 clat (msec): min=24, max=157, avg=76.04, stdev=20.37 00:24:43.797 lat (msec): min=24, max=157, avg=76.06, stdev=20.37 00:24:43.797 clat percentiles (msec): 00:24:43.797 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:24:43.797 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:24:43.797 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 110], 00:24:43.797 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 159], 99.95th=[ 159], 00:24:43.797 | 99.99th=[ 159] 00:24:43.797 bw ( KiB/s): min= 568, max= 1096, per=4.09%, avg=837.20, stdev=112.22, samples=20 00:24:43.797 iops : min= 142, max= 274, avg=209.30, stdev=28.05, samples=20 00:24:43.797 lat (msec) : 50=13.18%, 100=74.16%, 250=12.66% 00:24:43.797 cpu : usr=31.42%, sys=1.77%, ctx=850, majf=0, minf=9 00:24:43.797 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.3%, 16=17.0%, 32=0.0%, >=64=0.0% 00:24:43.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 complete : 0=0.0%, 4=87.9%, 8=11.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.797 issued rwts: total=2109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.797 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.797 filename2: (groupid=0, jobs=1): err= 0: pid=98697: Fri Nov 8 02:28:43 2024 00:24:43.797 read: IOPS=208, BW=832KiB/s (852kB/s)(8344KiB/10027msec) 00:24:43.797 slat (usec): min=8, max=8026, avg=25.36, stdev=232.09 00:24:43.798 clat (msec): min=25, max=164, avg=76.71, stdev=22.19 00:24:43.798 lat (msec): min=25, max=164, avg=76.73, stdev=22.18 00:24:43.798 clat percentiles (msec): 00:24:43.798 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 57], 00:24:43.798 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 78], 00:24:43.798 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 115], 00:24:43.798 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 165], 99.95th=[ 165], 00:24:43.798 | 99.99th=[ 165] 00:24:43.798 bw ( KiB/s): min= 520, max= 1048, per=4.05%, avg=828.05, stdev=137.38, samples=20 00:24:43.798 iops : min= 130, max= 262, avg=207.00, stdev=34.36, samples=20 00:24:43.798 lat (msec) : 50=11.79%, 100=70.42%, 250=17.79% 00:24:43.798 cpu : usr=40.93%, sys=2.68%, ctx=1256, majf=0, minf=9 00:24:43.798 IO depths : 1=0.1%, 2=2.1%, 4=8.0%, 8=75.0%, 16=14.9%, 32=0.0%, >=64=0.0% 00:24:43.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 complete : 0=0.0%, 4=89.2%, 8=9.0%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 issued rwts: total=2086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.798 filename2: (groupid=0, jobs=1): err= 0: pid=98698: Fri Nov 8 02:28:43 2024 00:24:43.798 read: IOPS=210, BW=844KiB/s (864kB/s)(8468KiB/10035msec) 00:24:43.798 slat (usec): min=4, max=4035, avg=17.38, stdev=91.40 00:24:43.798 clat (msec): min=33, max=157, avg=75.68, stdev=19.66 00:24:43.798 lat (msec): min=33, max=157, 
avg=75.70, stdev=19.66 00:24:43.798 clat percentiles (msec): 00:24:43.798 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 58], 00:24:43.798 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 79], 00:24:43.798 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 110], 00:24:43.798 | 99.00th=[ 129], 99.50th=[ 140], 99.90th=[ 159], 99.95th=[ 159], 00:24:43.798 | 99.99th=[ 159] 00:24:43.798 bw ( KiB/s): min= 616, max= 1010, per=4.11%, avg=842.60, stdev=100.37, samples=20 00:24:43.798 iops : min= 154, max= 252, avg=210.60, stdev=25.05, samples=20 00:24:43.798 lat (msec) : 50=9.64%, 100=77.89%, 250=12.47% 00:24:43.798 cpu : usr=42.35%, sys=2.32%, ctx=1499, majf=0, minf=9 00:24:43.798 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=78.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:43.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 complete : 0=0.0%, 4=88.7%, 8=10.3%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 issued rwts: total=2117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.798 filename2: (groupid=0, jobs=1): err= 0: pid=98699: Fri Nov 8 02:28:43 2024 00:24:43.798 read: IOPS=216, BW=864KiB/s (885kB/s)(8668KiB/10028msec) 00:24:43.798 slat (usec): min=4, max=8018, avg=21.00, stdev=192.30 00:24:43.798 clat (msec): min=27, max=175, avg=73.90, stdev=20.96 00:24:43.798 lat (msec): min=27, max=175, avg=73.92, stdev=20.97 00:24:43.798 clat percentiles (msec): 00:24:43.798 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:24:43.798 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:24:43.798 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 105], 95.00th=[ 114], 00:24:43.798 | 99.00th=[ 129], 99.50th=[ 148], 99.90th=[ 176], 99.95th=[ 176], 00:24:43.798 | 99.99th=[ 176] 00:24:43.798 bw ( KiB/s): min= 616, max= 1008, per=4.20%, avg=860.40, stdev=121.81, samples=20 00:24:43.798 iops : min= 154, max= 252, avg=215.10, stdev=30.45, samples=20 00:24:43.798 lat (msec) : 50=13.20%, 100=73.56%, 250=13.24% 00:24:43.798 cpu : usr=41.01%, sys=2.57%, ctx=1502, majf=0, minf=9 00:24:43.798 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=78.9%, 16=15.4%, 32=0.0%, >=64=0.0% 00:24:43.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 issued rwts: total=2167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.798 filename2: (groupid=0, jobs=1): err= 0: pid=98700: Fri Nov 8 02:28:43 2024 00:24:43.798 read: IOPS=213, BW=856KiB/s (876kB/s)(8580KiB/10025msec) 00:24:43.798 slat (usec): min=8, max=6449, avg=20.00, stdev=176.11 00:24:43.798 clat (msec): min=32, max=163, avg=74.61, stdev=20.82 00:24:43.798 lat (msec): min=32, max=163, avg=74.63, stdev=20.82 00:24:43.798 clat percentiles (msec): 00:24:43.798 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:24:43.798 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 78], 00:24:43.798 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 106], 95.00th=[ 113], 00:24:43.798 | 99.00th=[ 129], 99.50th=[ 144], 99.90th=[ 163], 99.95th=[ 163], 00:24:43.798 | 99.99th=[ 163] 00:24:43.798 bw ( KiB/s): min= 616, max= 1024, per=4.17%, avg=854.35, stdev=103.66, samples=20 00:24:43.798 iops : min= 154, max= 256, avg=213.55, stdev=25.98, samples=20 00:24:43.798 lat (msec) : 50=13.05%, 100=73.52%, 250=13.43% 00:24:43.798 cpu : usr=32.99%, sys=2.10%, ctx=1175, majf=0, minf=9 
00:24:43.798 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:43.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 issued rwts: total=2145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.798 filename2: (groupid=0, jobs=1): err= 0: pid=98701: Fri Nov 8 02:28:43 2024 00:24:43.798 read: IOPS=205, BW=823KiB/s (842kB/s)(8244KiB/10023msec) 00:24:43.798 slat (usec): min=4, max=8023, avg=22.19, stdev=249.47 00:24:43.798 clat (msec): min=23, max=160, avg=77.65, stdev=23.47 00:24:43.798 lat (msec): min=23, max=160, avg=77.67, stdev=23.47 00:24:43.798 clat percentiles (msec): 00:24:43.798 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:24:43.798 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:24:43.798 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:24:43.798 | 99.00th=[ 144], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 161], 00:24:43.798 | 99.99th=[ 161] 00:24:43.798 bw ( KiB/s): min= 512, max= 1048, per=4.00%, avg=818.00, stdev=154.67, samples=20 00:24:43.798 iops : min= 128, max= 262, avg=204.50, stdev=38.67, samples=20 00:24:43.798 lat (msec) : 50=14.31%, 100=70.01%, 250=15.67% 00:24:43.798 cpu : usr=31.32%, sys=1.66%, ctx=857, majf=0, minf=9 00:24:43.798 IO depths : 1=0.1%, 2=2.4%, 4=9.5%, 8=73.5%, 16=14.7%, 32=0.0%, >=64=0.0% 00:24:43.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 complete : 0=0.0%, 4=89.6%, 8=8.3%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 issued rwts: total=2061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.798 filename2: (groupid=0, jobs=1): err= 0: pid=98702: Fri Nov 8 02:28:43 2024 00:24:43.798 read: IOPS=212, BW=848KiB/s (868kB/s)(8500KiB/10022msec) 00:24:43.798 slat (usec): min=3, max=8024, avg=18.60, stdev=173.84 00:24:43.798 clat (msec): min=22, max=157, avg=75.29, stdev=20.70 00:24:43.798 lat (msec): min=22, max=157, avg=75.30, stdev=20.70 00:24:43.798 clat percentiles (msec): 00:24:43.798 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:24:43.798 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:24:43.798 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 108], 00:24:43.798 | 99.00th=[ 127], 99.50th=[ 140], 99.90th=[ 159], 99.95th=[ 159], 00:24:43.798 | 99.99th=[ 159] 00:24:43.798 bw ( KiB/s): min= 608, max= 1024, per=4.13%, avg=846.00, stdev=121.04, samples=20 00:24:43.798 iops : min= 152, max= 256, avg=211.50, stdev=30.26, samples=20 00:24:43.798 lat (msec) : 50=16.80%, 100=70.26%, 250=12.94% 00:24:43.798 cpu : usr=31.28%, sys=1.96%, ctx=851, majf=0, minf=9 00:24:43.798 IO depths : 1=0.1%, 2=1.9%, 4=7.7%, 8=75.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:24:43.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 complete : 0=0.0%, 4=89.1%, 8=9.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:43.798 issued rwts: total=2125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:43.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:43.798 00:24:43.798 Run status group 0 (all jobs): 00:24:43.798 READ: bw=20.0MiB/s (21.0MB/s), 773KiB/s-901KiB/s (791kB/s-923kB/s), io=201MiB (210MB), run=10001-10042msec 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:43.798 
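The per-job blocks above all roll up into the "Run status group 0 (all jobs)" summary just printed: each job averaged between 773 KiB/s and 901 KiB/s, and together they read 201 MiB over runtimes of roughly 10 s. A quick sanity check of the 20.0 MiB/s aggregate, using only the numbers taken from that summary line:

# 201 MiB read over the longest per-job runtime (10.042 s) should reproduce the aggregate bandwidth
echo "scale=1; 201/10.042" | bc    # prints 20.0 (MiB/s), matching READ: bw=20.0MiB/s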
02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:43.798 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:43.799 
02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.799 bdev_null0 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.799 [2024-11-08 02:28:44.224567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.799 bdev_null1 00:24:43.799 02:28:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:43.799 { 00:24:43.799 "params": { 00:24:43.799 "name": "Nvme$subsystem", 00:24:43.799 "trtype": "$TEST_TRANSPORT", 00:24:43.799 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:24:43.799 "adrfam": "ipv4", 00:24:43.799 "trsvcid": "$NVMF_PORT", 00:24:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.799 "hdgst": ${hdgst:-false}, 00:24:43.799 "ddgst": ${ddgst:-false} 00:24:43.799 }, 00:24:43.799 "method": "bdev_nvme_attach_controller" 00:24:43.799 } 00:24:43.799 EOF 00:24:43.799 )") 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:43.799 { 00:24:43.799 "params": { 00:24:43.799 "name": "Nvme$subsystem", 00:24:43.799 "trtype": "$TEST_TRANSPORT", 00:24:43.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:43.799 "adrfam": "ipv4", 00:24:43.799 "trsvcid": "$NVMF_PORT", 00:24:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:43.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:43.799 "hdgst": ${hdgst:-false}, 00:24:43.799 "ddgst": ${ddgst:-false} 00:24:43.799 }, 00:24:43.799 "method": "bdev_nvme_attach_controller" 00:24:43.799 } 00:24:43.799 EOF 00:24:43.799 )") 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:43.799 "params": { 00:24:43.799 "name": "Nvme0", 00:24:43.799 "trtype": "tcp", 00:24:43.799 "traddr": "10.0.0.3", 00:24:43.799 "adrfam": "ipv4", 00:24:43.799 "trsvcid": "4420", 00:24:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:43.799 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:43.799 "hdgst": false, 00:24:43.799 "ddgst": false 00:24:43.799 }, 00:24:43.799 "method": "bdev_nvme_attach_controller" 00:24:43.799 },{ 00:24:43.799 "params": { 00:24:43.799 "name": "Nvme1", 00:24:43.799 "trtype": "tcp", 00:24:43.799 "traddr": "10.0.0.3", 00:24:43.799 "adrfam": "ipv4", 00:24:43.799 "trsvcid": "4420", 00:24:43.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.799 "hdgst": false, 00:24:43.799 "ddgst": false 00:24:43.799 }, 00:24:43.799 "method": "bdev_nvme_attach_controller" 00:24:43.799 }' 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.799 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:43.800 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:43.800 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:43.800 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:43.800 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:43.800 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:43.800 02:28:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:43.800 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:43.800 ... 00:24:43.800 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:43.800 ... 
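The JSON printed above is only the bdev half of the fio invocation (what goes to --spdk_json_conf on /dev/fd/62); the job definitions arrive separately on /dev/fd/61. A minimal hand-written job file that would produce the filename0/filename1 listing below is sketched here, wrapped in the same bash the harness uses. The bdev names Nvme0n1/Nvme1n1 are assumptions following the usual <controller>n<nsid> convention for the Nvme0/Nvme1 controllers attached above, and the real gen_fio_conf output may differ in detail:

# Sketch of an equivalent standalone run (file name bdev.json and bdev names assumed;
# parameters taken from the target/dif.sh@115 trace above)
cat > rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json rand_params.fio   # bdev.json holds the JSON printed above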
00:24:43.800 fio-3.35 00:24:43.800 Starting 4 threads 00:24:49.074 00:24:49.074 filename0: (groupid=0, jobs=1): err= 0: pid=98834: Fri Nov 8 02:28:49 2024 00:24:49.074 read: IOPS=2029, BW=15.9MiB/s (16.6MB/s)(79.3MiB/5001msec) 00:24:49.074 slat (nsec): min=3120, max=54610, avg=12568.28, stdev=4589.73 00:24:49.074 clat (usec): min=666, max=5567, avg=3891.31, stdev=509.40 00:24:49.074 lat (usec): min=673, max=5595, avg=3903.87, stdev=509.63 00:24:49.074 clat percentiles (usec): 00:24:49.074 | 1.00th=[ 1778], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3556], 00:24:49.074 | 30.00th=[ 3621], 40.00th=[ 3818], 50.00th=[ 4080], 60.00th=[ 4113], 00:24:49.074 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4490], 00:24:49.074 | 99.00th=[ 4752], 99.50th=[ 4883], 99.90th=[ 5473], 99.95th=[ 5538], 00:24:49.074 | 99.99th=[ 5538] 00:24:49.074 bw ( KiB/s): min=14592, max=18208, per=23.08%, avg=16348.44, stdev=1396.84, samples=9 00:24:49.074 iops : min= 1824, max= 2276, avg=2043.56, stdev=174.60, samples=9 00:24:49.074 lat (usec) : 750=0.16%, 1000=0.28% 00:24:49.074 lat (msec) : 2=1.07%, 4=42.39%, 10=56.10% 00:24:49.074 cpu : usr=90.58%, sys=8.64%, ctx=44, majf=0, minf=0 00:24:49.074 IO depths : 1=0.1%, 2=23.2%, 4=51.1%, 8=25.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:49.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.074 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.074 issued rwts: total=10148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:49.074 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:49.074 filename0: (groupid=0, jobs=1): err= 0: pid=98835: Fri Nov 8 02:28:49 2024 00:24:49.074 read: IOPS=2398, BW=18.7MiB/s (19.6MB/s)(93.7MiB/5001msec) 00:24:49.074 slat (usec): min=3, max=190, avg=13.65, stdev= 4.62 00:24:49.074 clat (usec): min=434, max=6741, avg=3295.25, stdev=809.67 00:24:49.074 lat (usec): min=445, max=6755, avg=3308.90, stdev=810.01 00:24:49.074 clat percentiles (usec): 00:24:49.074 | 1.00th=[ 1958], 5.00th=[ 2008], 10.00th=[ 2040], 20.00th=[ 2278], 00:24:49.074 | 30.00th=[ 2704], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3654], 00:24:49.074 | 70.00th=[ 3851], 80.00th=[ 4015], 90.00th=[ 4146], 95.00th=[ 4293], 00:24:49.074 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5407], 99.95th=[ 5473], 00:24:49.074 | 99.99th=[ 5538] 00:24:49.075 bw ( KiB/s): min=16896, max=21120, per=26.80%, avg=18984.89, stdev=1606.15, samples=9 00:24:49.075 iops : min= 2112, max= 2640, avg=2373.11, stdev=200.77, samples=9 00:24:49.075 lat (usec) : 500=0.01% 00:24:49.075 lat (msec) : 2=4.62%, 4=73.03%, 10=22.35% 00:24:49.075 cpu : usr=89.56%, sys=9.52%, ctx=25, majf=0, minf=0 00:24:49.075 IO depths : 1=0.1%, 2=9.2%, 4=58.7%, 8=32.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:49.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.075 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.075 issued rwts: total=11993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:49.075 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:49.075 filename1: (groupid=0, jobs=1): err= 0: pid=98836: Fri Nov 8 02:28:49 2024 00:24:49.075 read: IOPS=2370, BW=18.5MiB/s (19.4MB/s)(92.6MiB/5001msec) 00:24:49.075 slat (nsec): min=3185, max=56664, avg=14050.07, stdev=4242.70 00:24:49.075 clat (usec): min=1192, max=6434, avg=3333.14, stdev=810.78 00:24:49.075 lat (usec): min=1205, max=6449, avg=3347.19, stdev=810.29 00:24:49.075 clat percentiles (usec): 00:24:49.075 | 1.00th=[ 1958], 5.00th=[ 
2008], 10.00th=[ 2040], 20.00th=[ 2278], 00:24:49.075 | 30.00th=[ 2737], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3687], 00:24:49.075 | 70.00th=[ 3916], 80.00th=[ 4047], 90.00th=[ 4146], 95.00th=[ 4293], 00:24:49.075 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5473], 99.95th=[ 5473], 00:24:49.075 | 99.99th=[ 5538] 00:24:49.075 bw ( KiB/s): min=16512, max=21120, per=26.45%, avg=18737.78, stdev=1803.58, samples=9 00:24:49.075 iops : min= 2064, max= 2640, avg=2342.22, stdev=225.45, samples=9 00:24:49.075 lat (msec) : 2=4.21%, 4=71.73%, 10=24.06% 00:24:49.075 cpu : usr=90.56%, sys=8.54%, ctx=8, majf=0, minf=0 00:24:49.075 IO depths : 1=0.1%, 2=10.1%, 4=58.2%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:49.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.075 complete : 0=0.0%, 4=96.2%, 8=3.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.075 issued rwts: total=11853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:49.075 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:49.075 filename1: (groupid=0, jobs=1): err= 0: pid=98837: Fri Nov 8 02:28:49 2024 00:24:49.075 read: IOPS=2062, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5004msec) 00:24:49.075 slat (nsec): min=6221, max=48191, avg=12802.43, stdev=4422.49 00:24:49.075 clat (usec): min=893, max=6389, avg=3831.30, stdev=562.98 00:24:49.075 lat (usec): min=901, max=6403, avg=3844.10, stdev=563.48 00:24:49.075 clat percentiles (usec): 00:24:49.075 | 1.00th=[ 1729], 5.00th=[ 2540], 10.00th=[ 3458], 20.00th=[ 3523], 00:24:49.075 | 30.00th=[ 3589], 40.00th=[ 3720], 50.00th=[ 4047], 60.00th=[ 4113], 00:24:49.075 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4424], 00:24:49.075 | 99.00th=[ 4686], 99.50th=[ 4752], 99.90th=[ 4883], 99.95th=[ 4883], 00:24:49.075 | 99.99th=[ 4948] 00:24:49.075 bw ( KiB/s): min=14720, max=19296, per=23.29%, avg=16499.20, stdev=1667.57, samples=10 00:24:49.075 iops : min= 1840, max= 2412, avg=2062.40, stdev=208.45, samples=10 00:24:49.075 lat (usec) : 1000=0.12% 00:24:49.075 lat (msec) : 2=1.69%, 4=45.32%, 10=52.88% 00:24:49.075 cpu : usr=91.53%, sys=7.68%, ctx=55, majf=0, minf=0 00:24:49.075 IO depths : 1=0.1%, 2=22.1%, 4=51.8%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:49.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.075 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.075 issued rwts: total=10320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:49.075 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:49.075 00:24:49.075 Run status group 0 (all jobs): 00:24:49.075 READ: bw=69.2MiB/s (72.5MB/s), 15.9MiB/s-18.7MiB/s (16.6MB/s-19.6MB/s), io=346MiB (363MB), run=5001-5004msec 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:49.075 02:28:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.075 ************************************ 00:24:49.075 END TEST fio_dif_rand_params 00:24:49.075 ************************************ 00:24:49.075 00:24:49.075 real 0m23.001s 00:24:49.075 user 2m2.622s 00:24:49.075 sys 0m8.886s 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:49.075 02:28:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:49.075 02:28:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:49.075 02:28:50 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:49.075 02:28:50 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:49.075 02:28:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:49.075 ************************************ 00:24:49.075 START TEST fio_dif_digest 00:24:49.075 ************************************ 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:49.075 02:28:50 
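The digest test set up from here keeps the same single-subsystem layout but changes two things: NULL_DIF=3 makes the helper create its null bdev with 16 bytes of per-block metadata carrying DIF type 3 protection information, and hdgst/ddgst=true are passed through to the initiator configuration further down, enabling the NVMe/TCP header and data digests (CRC32C over each PDU header and data segment). The one RPC that differs from the earlier setups is the bdev creation; as a sketch with scripts/rpc.py (the harness issues the same call through its rpc_cmd wrapper just below):

./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 64 MB bdev, 512-byte blocks + 16 B metadata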
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:49.075 bdev_null0 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:49.075 [2024-11-08 02:28:50.224977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:49.075 02:28:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:49.075 { 00:24:49.075 "params": { 00:24:49.075 "name": "Nvme$subsystem", 00:24:49.075 "trtype": "$TEST_TRANSPORT", 00:24:49.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:49.075 "adrfam": "ipv4", 00:24:49.075 "trsvcid": 
"$NVMF_PORT", 00:24:49.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:49.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:49.076 "hdgst": ${hdgst:-false}, 00:24:49.076 "ddgst": ${ddgst:-false} 00:24:49.076 }, 00:24:49.076 "method": "bdev_nvme_attach_controller" 00:24:49.076 } 00:24:49.076 EOF 00:24:49.076 )") 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:49.076 "params": { 00:24:49.076 "name": "Nvme0", 00:24:49.076 "trtype": "tcp", 00:24:49.076 "traddr": "10.0.0.3", 00:24:49.076 "adrfam": "ipv4", 00:24:49.076 "trsvcid": "4420", 00:24:49.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:49.076 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:49.076 "hdgst": true, 00:24:49.076 "ddgst": true 00:24:49.076 }, 00:24:49.076 "method": "bdev_nvme_attach_controller" 00:24:49.076 }' 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:49.076 02:28:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:49.076 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:49.076 ... 
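Relative to the earlier attach-controller blocks, the parameters printed above now carry "hdgst": true and "ddgst": true, which is all that is needed to switch on the TCP header and data digests for this connection. Done interactively, the equivalent attach might look like the sketch below; the --hdgst/--ddgst flag names are an assumption, so check scripts/rpc.py bdev_nvme_attach_controller -h before relying on them:

./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --hdgst --ddgst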
00:24:49.076 fio-3.35 00:24:49.076 Starting 3 threads 00:24:59.055 00:24:59.055 filename0: (groupid=0, jobs=1): err= 0: pid=98943: Fri Nov 8 02:29:00 2024 00:24:59.055 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(318MiB/10006msec) 00:24:59.055 slat (nsec): min=6762, max=38730, avg=9343.89, stdev=3463.36 00:24:59.055 clat (usec): min=6567, max=13765, avg=11789.55, stdev=478.49 00:24:59.055 lat (usec): min=6574, max=13777, avg=11798.89, stdev=478.79 00:24:59.055 clat percentiles (usec): 00:24:59.055 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:24:59.055 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:24:59.055 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12780], 00:24:59.055 | 99.00th=[13435], 99.50th=[13698], 99.90th=[13698], 99.95th=[13698], 00:24:59.055 | 99.99th=[13829] 00:24:59.055 bw ( KiB/s): min=31488, max=33024, per=33.37%, avg=32538.95, stdev=458.70, samples=19 00:24:59.055 iops : min= 246, max= 258, avg=254.21, stdev= 3.58, samples=19 00:24:59.055 lat (msec) : 10=0.12%, 20=99.88% 00:24:59.055 cpu : usr=90.99%, sys=8.49%, ctx=15, majf=0, minf=9 00:24:59.055 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:59.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.055 issued rwts: total=2541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.055 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:59.055 filename0: (groupid=0, jobs=1): err= 0: pid=98944: Fri Nov 8 02:29:00 2024 00:24:59.055 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(318MiB/10007msec) 00:24:59.055 slat (nsec): min=6707, max=43732, avg=9428.96, stdev=3615.80 00:24:59.055 clat (usec): min=7999, max=14611, avg=11791.42, stdev=486.31 00:24:59.055 lat (usec): min=8006, max=14637, avg=11800.85, stdev=486.70 00:24:59.055 clat percentiles (usec): 00:24:59.055 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:24:59.055 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:24:59.055 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12780], 00:24:59.055 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14615], 99.95th=[14615], 00:24:59.055 | 99.99th=[14615] 00:24:59.055 bw ( KiB/s): min=31488, max=33792, per=33.33%, avg=32498.53, stdev=629.81, samples=19 00:24:59.055 iops : min= 246, max= 264, avg=253.89, stdev= 4.92, samples=19 00:24:59.055 lat (msec) : 10=0.24%, 20=99.76% 00:24:59.055 cpu : usr=91.03%, sys=8.48%, ctx=19, majf=0, minf=0 00:24:59.055 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:59.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.055 issued rwts: total=2541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.055 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:59.055 filename0: (groupid=0, jobs=1): err= 0: pid=98945: Fri Nov 8 02:29:00 2024 00:24:59.055 read: IOPS=253, BW=31.7MiB/s (33.3MB/s)(318MiB/10007msec) 00:24:59.055 slat (nsec): min=6775, max=45416, avg=9334.70, stdev=3539.64 00:24:59.055 clat (usec): min=8097, max=13861, avg=11791.59, stdev=469.35 00:24:59.055 lat (usec): min=8104, max=13873, avg=11800.92, stdev=469.66 00:24:59.055 clat percentiles (usec): 00:24:59.055 | 1.00th=[11338], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:24:59.055 | 30.00th=[11469], 40.00th=[11600], 
50.00th=[11600], 60.00th=[11731], 00:24:59.055 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12780], 00:24:59.055 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:24:59.055 | 99.99th=[13829] 00:24:59.055 bw ( KiB/s): min=31488, max=33024, per=33.33%, avg=32498.53, stdev=447.28, samples=19 00:24:59.055 iops : min= 246, max= 258, avg=253.89, stdev= 3.49, samples=19 00:24:59.055 lat (msec) : 10=0.12%, 20=99.88% 00:24:59.055 cpu : usr=90.90%, sys=8.61%, ctx=9, majf=0, minf=0 00:24:59.055 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:59.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:59.055 issued rwts: total=2541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:59.055 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:59.055 00:24:59.055 Run status group 0 (all jobs): 00:24:59.055 READ: bw=95.2MiB/s (99.8MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=953MiB (999MB), run=10006-10007msec 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.314 02:29:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:59.314 ************************************ 00:24:59.314 END TEST fio_dif_digest 00:24:59.314 ************************************ 00:24:59.315 02:29:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.315 00:24:59.315 real 0m10.873s 00:24:59.315 user 0m27.880s 00:24:59.315 sys 0m2.786s 00:24:59.315 02:29:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:59.315 02:29:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:59.315 02:29:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:59.315 02:29:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:59.315 02:29:01 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:59.315 02:29:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:24:59.315 02:29:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:59.315 02:29:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:24:59.315 02:29:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:59.315 02:29:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:59.315 rmmod nvme_tcp 00:24:59.315 rmmod nvme_fabrics 00:24:59.574 rmmod nvme_keyring 00:24:59.574 02:29:01 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:59.574 02:29:01 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:24:59.574 02:29:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:24:59.574 02:29:01 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 98206 ']' 00:24:59.574 02:29:01 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 98206 00:24:59.574 02:29:01 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 98206 ']' 00:24:59.574 02:29:01 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 98206 00:24:59.574 02:29:01 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:24:59.574 02:29:01 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:59.574 02:29:01 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98206 00:24:59.574 killing process with pid 98206 00:24:59.574 02:29:01 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:59.574 02:29:01 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:59.574 02:29:01 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98206' 00:24:59.574 02:29:01 nvmf_dif -- common/autotest_common.sh@969 -- # kill 98206 00:24:59.574 02:29:01 nvmf_dif -- common/autotest_common.sh@974 -- # wait 98206 00:24:59.574 02:29:01 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:24:59.574 02:29:01 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:59.832 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:00.092 Waiting for block devices as requested 00:25:00.092 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:00.092 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:00.092 02:29:01 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:00.365 02:29:01 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:00.365 02:29:01 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:00.365 02:29:01 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:00.365 02:29:01 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:00.365 02:29:02 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:00.365 02:29:02 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:00.365 02:29:02 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:00.365 02:29:02 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:00.365 02:29:02 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:00.365 02:29:02 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:00.365 02:29:02 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.365 02:29:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:00.365 02:29:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.365 02:29:02 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:25:00.365 00:25:00.365 real 0m58.461s 00:25:00.365 user 3m44.807s 00:25:00.365 sys 0m20.334s 00:25:00.365 ************************************ 00:25:00.365 END TEST nvmf_dif 00:25:00.365 ************************************ 00:25:00.365 02:29:02 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:00.365 02:29:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:00.365 02:29:02 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:00.365 02:29:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:00.365 02:29:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:00.365 02:29:02 -- common/autotest_common.sh@10 -- # set +x 00:25:00.365 ************************************ 00:25:00.365 START TEST nvmf_abort_qd_sizes 00:25:00.365 ************************************ 00:25:00.365 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:00.696 * Looking for test storage... 00:25:00.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:00.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.696 --rc genhtml_branch_coverage=1 00:25:00.696 --rc genhtml_function_coverage=1 00:25:00.696 --rc genhtml_legend=1 00:25:00.696 --rc geninfo_all_blocks=1 00:25:00.696 --rc geninfo_unexecuted_blocks=1 00:25:00.696 00:25:00.696 ' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:00.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.696 --rc genhtml_branch_coverage=1 00:25:00.696 --rc genhtml_function_coverage=1 00:25:00.696 --rc genhtml_legend=1 00:25:00.696 --rc geninfo_all_blocks=1 00:25:00.696 --rc geninfo_unexecuted_blocks=1 00:25:00.696 00:25:00.696 ' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:00.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.696 --rc genhtml_branch_coverage=1 00:25:00.696 --rc genhtml_function_coverage=1 00:25:00.696 --rc genhtml_legend=1 00:25:00.696 --rc geninfo_all_blocks=1 00:25:00.696 --rc geninfo_unexecuted_blocks=1 00:25:00.696 00:25:00.696 ' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:00.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.696 --rc genhtml_branch_coverage=1 00:25:00.696 --rc genhtml_function_coverage=1 00:25:00.696 --rc genhtml_legend=1 00:25:00.696 --rc geninfo_all_blocks=1 00:25:00.696 --rc geninfo_unexecuted_blocks=1 00:25:00.696 00:25:00.696 ' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.696 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:00.696 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:00.697 Cannot find device "nvmf_init_br" 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:00.697 Cannot find device "nvmf_init_br2" 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:00.697 Cannot find device "nvmf_tgt_br" 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:00.697 Cannot find device "nvmf_tgt_br2" 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:00.697 Cannot find device "nvmf_init_br" 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:00.697 Cannot find device "nvmf_init_br2" 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:00.697 Cannot find device "nvmf_tgt_br" 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:25:00.697 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:00.697 Cannot find device "nvmf_tgt_br2" 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:00.955 Cannot find device "nvmf_br" 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:00.955 Cannot find device "nvmf_init_if" 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:00.955 Cannot find device "nvmf_init_if2" 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:00.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
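[editor's note] The nvmf_veth_init sequence that follows rebuilds the virtual topology just torn down: initiator-side veths (10.0.0.1/10.0.0.2) in the root namespace, target-side veths (10.0.0.3/10.0.0.4) inside nvmf_tgt_ns_spdk, with the peer ends enslaved to the nvmf_br bridge. A stripped-down one-leg sketch of that layout, using only commands that appear in the log (the real helper creates two initiator and two target interfaces):

  # Minimal one-leg sketch of the topology nvmf_veth_init builds below.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target pair, one end moves into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the root-namespace peer ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                           # host initiator -> namespaced target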
00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:00.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:00.955 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:00.955 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:25:00.955 00:25:00.955 --- 10.0.0.3 ping statistics --- 00:25:00.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.955 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:00.955 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:00.955 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:25:00.955 00:25:00.955 --- 10.0.0.4 ping statistics --- 00:25:00.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.955 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:00.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:25:00.955 00:25:00.955 --- 10.0.0.1 ping statistics --- 00:25:00.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.955 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:00.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:00.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:25:00.955 00:25:00.955 --- 10.0.0.2 ping statistics --- 00:25:00.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.955 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:25:00.955 02:29:02 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:01.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:01.893 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:01.893 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=99589 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 99589 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:25:01.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 99589 ']' 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:01.893 02:29:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:02.152 [2024-11-08 02:29:03.776989] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
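[editor's note] nvmfappstart launches the target inside the namespace with the command line shown above and then blocks until the RPC socket answers. A generic sketch of that start-and-wait step — the polling loop here is an illustration, not the actual waitforlisten implementation:

  # Run nvmf_tgt inside the test namespace, then wait for its RPC socket.
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # hypothetical wait loop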
00:25:02.152 [2024-11-08 02:29:03.777285] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.152 [2024-11-08 02:29:03.919321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:02.152 [2024-11-08 02:29:03.965579] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.152 [2024-11-08 02:29:03.965883] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.152 [2024-11-08 02:29:03.966051] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.152 [2024-11-08 02:29:03.966200] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.152 [2024-11-08 02:29:03.966216] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.152 [2024-11-08 02:29:03.966378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.152 [2024-11-08 02:29:03.966668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.152 [2024-11-08 02:29:03.967064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.152 [2024-11-08 02:29:03.967074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.152 [2024-11-08 02:29:04.004763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:25:02.412 02:29:04 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
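[editor's note] The nvme_in_userspace walk above selects PCI functions whose class/subclass/progif is 01/08/02 (NVMe) and keeps only those still bound to the kernel nvme driver, yielding the two BDFs printed next. A condensed sketch of the same enumeration, reusing the exact lspci/awk filter from the log:

  # Enumerate NVMe controllers still owned by the kernel nvme driver.
  # class 01 (mass storage), subclass 08 (NVM), progif 02 (NVMe) => "0108" + -p02.
  nvmes=()
  for bdf in $(lspci -mm -n -D | grep -i -- -p02 | \
               awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && nvmes+=("$bdf")
  done
  printf '%s\n' "${nvmes[@]}"   # e.g. 0000:00:10.0 0000:00:11.0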
00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:02.412 02:29:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:02.412 ************************************ 00:25:02.412 START TEST spdk_target_abort 00:25:02.413 ************************************ 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:02.413 spdk_targetn1 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:02.413 [2024-11-08 02:29:04.222022] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:02.413 [2024-11-08 02:29:04.250541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:02.413 02:29:04 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:02.413 02:29:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:05.702 Initializing NVMe Controllers 00:25:05.702 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:05.702 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:05.702 Initialization complete. Launching workers. 
00:25:05.702 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9909, failed: 0 00:25:05.702 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1041, failed to submit 8868 00:25:05.702 success 801, unsuccessful 240, failed 0 00:25:05.702 02:29:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:05.702 02:29:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:08.991 Initializing NVMe Controllers 00:25:08.992 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:08.992 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:08.992 Initialization complete. Launching workers. 00:25:08.992 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8928, failed: 0 00:25:08.992 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1159, failed to submit 7769 00:25:08.992 success 349, unsuccessful 810, failed 0 00:25:08.992 02:29:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:08.992 02:29:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:12.280 Initializing NVMe Controllers 00:25:12.280 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:25:12.280 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:12.280 Initialization complete. Launching workers. 
00:25:12.280 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31406, failed: 0 00:25:12.280 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2210, failed to submit 29196 00:25:12.280 success 416, unsuccessful 1794, failed 0 00:25:12.280 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:25:12.280 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.280 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:12.280 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.280 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:25:12.280 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.280 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99589 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 99589 ']' 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 99589 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99589 00:25:12.540 killing process with pid 99589 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99589' 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 99589 00:25:12.540 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 99589 00:25:12.799 ************************************ 00:25:12.799 END TEST spdk_target_abort 00:25:12.799 ************************************ 00:25:12.799 00:25:12.799 real 0m10.383s 00:25:12.799 user 0m39.941s 00:25:12.799 sys 0m1.941s 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:12.799 02:29:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:25:12.799 02:29:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:12.799 02:29:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:12.799 02:29:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:12.799 ************************************ 00:25:12.799 START TEST kernel_target_abort 00:25:12.799 
************************************ 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:12.799 02:29:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:13.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:13.317 Waiting for block devices as requested 00:25:13.317 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:13.317 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:13.317 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:25:13.317 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:13.317 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:25:13.317 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:13.317 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:13.317 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:13.317 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:25:13.317 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:13.317 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:13.576 No valid GPT data, bailing 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:13.576 No valid GPT data, bailing 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
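The block-device scan above (from nvmf/common.sh) is looking for a namespace to back the kernel target: it walks /sys/block/nvme*, skips zoned devices, and records each device that has no partition table, so the last free device found is the one exported. In outline, and simplifying the real helper (which consults scripts/spdk-gpt.py before falling back to blkid), it amounts to a sketch like:

  # pick a non-zoned NVMe block device with no partition table; the last match wins
  for block in /sys/block/nvme*; do
      dev=$(basename "$block")
      [[ -e $block/queue/zoned && $(< "$block/queue/zoned") != none ]] && continue
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue   # a PTTYPE means the disk is in use
      nvme=/dev/$dev
  done

In this run all four candidates report "No valid GPT data, bailing" and /dev/nvme1n1 ends up as the exported device.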
00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:13.576 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:13.577 No valid GPT data, bailing 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:13.577 No valid GPT data, bailing 00:25:13.577 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 --hostid=29f72880-00cc-41cd-b50e-5c2a72cc9156 -a 10.0.0.1 -t tcp -s 4420 00:25:13.836 00:25:13.836 Discovery Log Number of Records 2, Generation counter 2 00:25:13.836 =====Discovery Log Entry 0====== 00:25:13.836 trtype: tcp 00:25:13.836 adrfam: ipv4 00:25:13.836 subtype: current discovery subsystem 00:25:13.836 treq: not specified, sq flow control disable supported 00:25:13.836 portid: 1 00:25:13.836 trsvcid: 4420 00:25:13.836 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:13.836 traddr: 10.0.0.1 00:25:13.836 eflags: none 00:25:13.836 sectype: none 00:25:13.836 =====Discovery Log Entry 1====== 00:25:13.836 trtype: tcp 00:25:13.836 adrfam: ipv4 00:25:13.836 subtype: nvme subsystem 00:25:13.836 treq: not specified, sq flow control disable supported 00:25:13.836 portid: 1 00:25:13.836 trsvcid: 4420 00:25:13.836 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:13.836 traddr: 10.0.0.1 00:25:13.836 eflags: none 00:25:13.836 sectype: none 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:13.836 02:29:15 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:13.836 02:29:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:17.126 Initializing NVMe Controllers 00:25:17.126 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:17.126 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:17.126 Initialization complete. Launching workers. 00:25:17.126 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31985, failed: 0 00:25:17.126 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31985, failed to submit 0 00:25:17.126 success 0, unsuccessful 31985, failed 0 00:25:17.126 02:29:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:17.126 02:29:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:20.414 Initializing NVMe Controllers 00:25:20.414 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:20.414 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:20.414 Initialization complete. Launching workers. 
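The mkdir/echo/ln -s sequence further up (configure_kernel_target in nvmf/common.sh) builds the kernel NVMe/TCP target through configfs. The xtrace does not show where each echo is redirected, so the standard nvmet attribute names are assumed in this condensed sketch:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
  echo 1            > "$subsys/attr_allow_any_host"        # assumed target of the first bare 'echo 1'
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover output above ("Discovery Log Number of Records 2") confirms the subsystem is reachable on 10.0.0.1:4420 before the abort runs start.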
00:25:20.414 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63070, failed: 0 00:25:20.414 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25882, failed to submit 37188 00:25:20.414 success 0, unsuccessful 25882, failed 0 00:25:20.414 02:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:20.414 02:29:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:23.705 Initializing NVMe Controllers 00:25:23.705 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:23.705 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:23.705 Initialization complete. Launching workers. 00:25:23.705 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68930, failed: 0 00:25:23.705 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17230, failed to submit 51700 00:25:23.705 success 0, unsuccessful 17230, failed 0 00:25:23.705 02:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:25:23.705 02:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:23.705 02:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:25:23.705 02:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:23.705 02:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:23.705 02:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:23.705 02:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:23.705 02:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:25:23.705 02:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:25:23.705 02:29:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:23.964 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:24.532 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:24.791 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:24.791 00:25:24.791 real 0m11.896s 00:25:24.791 user 0m5.879s 00:25:24.791 sys 0m3.432s 00:25:24.791 02:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:24.791 ************************************ 00:25:24.791 END TEST kernel_target_abort 00:25:24.791 02:29:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:24.791 ************************************ 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:25:24.791 
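The three abort runs above are the same command with only the queue depth changed; rabort in abort_qd_sizes.sh loops over qds=(4 24 64), roughly:

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

-q is the queue depth under test; the remaining flags select a 4096-byte, 50/50 read/write workload. Against the kernel target every submitted abort is reported as unsuccessful (success 0) and the test still passes; clean_kernel_target then removes the port symlink, the namespace and subsystem directories, and unloads nvmet_tcp/nvmet before setup.sh rebinds the PCI devices.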
02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:24.791 rmmod nvme_tcp 00:25:24.791 rmmod nvme_fabrics 00:25:24.791 rmmod nvme_keyring 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 99589 ']' 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 99589 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 99589 ']' 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 99589 00:25:24.791 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (99589) - No such process 00:25:24.791 Process with pid 99589 is not found 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 99589 is not found' 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:25:24.791 02:29:26 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:25.359 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:25.359 Waiting for block devices as requested 00:25:25.359 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:25.359 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:25.359 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:25.359 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:25.359 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:25:25.359 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:25:25.359 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:25.359 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:25.618 02:29:27 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:25:25.618 00:25:25.618 real 0m25.276s 00:25:25.618 user 0m46.982s 00:25:25.618 sys 0m6.815s 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:25.618 02:29:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:25.618 ************************************ 00:25:25.618 END TEST nvmf_abort_qd_sizes 00:25:25.618 ************************************ 00:25:25.878 02:29:27 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:25.878 02:29:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:25.878 02:29:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:25.878 02:29:27 -- common/autotest_common.sh@10 -- # set +x 00:25:25.878 ************************************ 00:25:25.878 START TEST keyring_file 00:25:25.878 ************************************ 00:25:25.878 02:29:27 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:25.878 * Looking for test storage... 
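The nvmf_tcp_fini/nvmf_veth_fini steps above tear down the virtual test topology created earlier in this suite and restore iptables, keeping only the non-SPDK rules. Condensed (interface handling regrouped slightly):

  iptables-save | grep -v SPDK_NVMF | iptables-restore         # drop only the SPDK_NVMF test rules
  for veth in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$veth" nomaster
      ip link set "$veth" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  # finally remove_spdk_ns deletes the nvmf_tgt_ns_spdk namespace itself

With that, nvmf_abort_qd_sizes is done and autotest moves on to the keyring_file suite.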
00:25:25.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:25.878 02:29:27 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:25.878 02:29:27 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:25:25.878 02:29:27 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:25.878 02:29:27 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@345 -- # : 1 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@353 -- # local d=1 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@355 -- # echo 1 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@353 -- # local d=2 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@355 -- # echo 2 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@368 -- # return 0 00:25:25.878 02:29:27 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:25.878 02:29:27 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:25.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.878 --rc genhtml_branch_coverage=1 00:25:25.878 --rc genhtml_function_coverage=1 00:25:25.878 --rc genhtml_legend=1 00:25:25.878 --rc geninfo_all_blocks=1 00:25:25.878 --rc geninfo_unexecuted_blocks=1 00:25:25.878 00:25:25.878 ' 00:25:25.878 02:29:27 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:25.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.878 --rc genhtml_branch_coverage=1 00:25:25.878 --rc genhtml_function_coverage=1 00:25:25.878 --rc genhtml_legend=1 00:25:25.878 --rc geninfo_all_blocks=1 00:25:25.878 --rc 
geninfo_unexecuted_blocks=1 00:25:25.878 00:25:25.878 ' 00:25:25.878 02:29:27 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:25.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.878 --rc genhtml_branch_coverage=1 00:25:25.878 --rc genhtml_function_coverage=1 00:25:25.878 --rc genhtml_legend=1 00:25:25.878 --rc geninfo_all_blocks=1 00:25:25.878 --rc geninfo_unexecuted_blocks=1 00:25:25.878 00:25:25.878 ' 00:25:25.878 02:29:27 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:25.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:25.878 --rc genhtml_branch_coverage=1 00:25:25.878 --rc genhtml_function_coverage=1 00:25:25.878 --rc genhtml_legend=1 00:25:25.878 --rc geninfo_all_blocks=1 00:25:25.878 --rc geninfo_unexecuted_blocks=1 00:25:25.878 00:25:25.878 ' 00:25:25.878 02:29:27 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:25.878 02:29:27 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.878 02:29:27 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.878 02:29:27 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.878 02:29:27 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.878 02:29:27 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.878 02:29:27 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.879 02:29:27 keyring_file -- paths/export.sh@5 -- # export PATH 00:25:25.879 02:29:27 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.879 02:29:27 keyring_file -- nvmf/common.sh@51 -- # : 0 00:25:25.879 02:29:27 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:25.879 02:29:27 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:25.879 02:29:27 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.879 02:29:27 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.879 02:29:27 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.879 02:29:27 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:25.879 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:25.879 02:29:27 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:25.879 02:29:27 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:25.879 02:29:27 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:26.138 02:29:27 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.znZGnu65cm 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.znZGnu65cm 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.znZGnu65cm 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.znZGnu65cm 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@17 -- # name=key1 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xAzOFmvYsG 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:26.138 02:29:27 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xAzOFmvYsG 00:25:26.138 02:29:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xAzOFmvYsG 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.xAzOFmvYsG 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=100482 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:26.138 02:29:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100482 00:25:26.138 02:29:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100482 ']' 00:25:26.138 02:29:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.138 02:29:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:26.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
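prep_key above writes each test key to a mktemp file in NVMe TLS interchange form and locks the permissions down; the python one-liner that does the formatting runs as 'python -' and its body is not visible in the trace, so it is left as the helper call in this sketch:

  # key0: 00112233445566778899aabbccddeeff, digest 0 -> /tmp/tmp.znZGnu65cm in this run
  key0path=$(mktemp)
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"   # helper from test/nvmf/common.sh
  chmod 0600 "$key0path"   # the keyring rejects looser modes; a later step shows 0660 failing

  # once bdevperf is up, the file is registered over its RPC socket:
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"

The same is done for key1 (112233445566778899aabbccddeeff00) into a second temp file.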
00:25:26.138 02:29:27 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.138 02:29:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:26.138 02:29:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:26.138 [2024-11-08 02:29:27.955833] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:26.138 [2024-11-08 02:29:27.955930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100482 ] 00:25:26.397 [2024-11-08 02:29:28.094644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.397 [2024-11-08 02:29:28.140051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.397 [2024-11-08 02:29:28.185929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:26.656 02:29:28 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.656 02:29:28 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:26.656 02:29:28 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:26.657 [2024-11-08 02:29:28.326030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.657 null0 00:25:26.657 [2024-11-08 02:29:28.358005] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:26.657 [2024-11-08 02:29:28.358209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.657 02:29:28 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:26.657 [2024-11-08 02:29:28.389994] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:25:26.657 request: 00:25:26.657 { 00:25:26.657 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:25:26.657 "secure_channel": false, 00:25:26.657 "listen_address": { 00:25:26.657 "trtype": "tcp", 00:25:26.657 "traddr": "127.0.0.1", 00:25:26.657 "trsvcid": "4420" 00:25:26.657 }, 00:25:26.657 "method": "nvmf_subsystem_add_listener", 
00:25:26.657 "req_id": 1 00:25:26.657 } 00:25:26.657 Got JSON-RPC error response 00:25:26.657 response: 00:25:26.657 { 00:25:26.657 "code": -32602, 00:25:26.657 "message": "Invalid parameters" 00:25:26.657 } 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:26.657 02:29:28 keyring_file -- keyring/file.sh@47 -- # bperfpid=100487 00:25:26.657 02:29:28 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:25:26.657 02:29:28 keyring_file -- keyring/file.sh@49 -- # waitforlisten 100487 /var/tmp/bperf.sock 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100487 ']' 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:26.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:26.657 02:29:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:26.657 [2024-11-08 02:29:28.461659] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:26.657 [2024-11-08 02:29:28.461760] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100487 ] 00:25:26.915 [2024-11-08 02:29:28.602098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.915 [2024-11-08 02:29:28.642859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.915 [2024-11-08 02:29:28.675036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:26.915 02:29:28 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.915 02:29:28 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:26.915 02:29:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.znZGnu65cm 00:25:26.915 02:29:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.znZGnu65cm 00:25:27.172 02:29:28 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xAzOFmvYsG 00:25:27.172 02:29:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xAzOFmvYsG 00:25:27.430 02:29:29 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:25:27.430 02:29:29 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:27.430 02:29:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:27.430 02:29:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:27.430 02:29:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:27.689 02:29:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.znZGnu65cm == \/\t\m\p\/\t\m\p\.\z\n\Z\G\n\u\6\5\c\m ]] 00:25:27.689 02:29:29 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:25:27.689 02:29:29 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:25:27.689 02:29:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:27.689 02:29:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:27.689 02:29:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:27.946 02:29:29 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.xAzOFmvYsG == \/\t\m\p\/\t\m\p\.\x\A\z\O\F\m\v\Y\s\G ]] 00:25:27.946 02:29:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:25:27.946 02:29:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:27.946 02:29:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:27.946 02:29:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:27.946 02:29:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:27.946 02:29:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:28.204 02:29:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:28.204 02:29:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:25:28.204 02:29:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:28.204 02:29:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:28.204 02:29:29 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:28.204 02:29:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:28.204 02:29:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:28.462 02:29:30 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:25:28.462 02:29:30 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:28.462 02:29:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:28.721 [2024-11-08 02:29:30.464873] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:28.721 nvme0n1 00:25:28.721 02:29:30 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:25:28.721 02:29:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:28.721 02:29:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:28.721 02:29:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:28.721 02:29:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:28.721 02:29:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:28.979 02:29:30 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:25:28.979 02:29:30 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:25:28.979 02:29:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:28.979 02:29:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:28.979 02:29:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:28.979 02:29:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:28.979 02:29:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:29.236 02:29:31 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:25:29.236 02:29:31 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:29.494 Running I/O for 1 seconds... 
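The attach and the benchmark above are both driven over bdevperf's RPC socket rather than its command line; stripped of the bperf_cmd wrapper, the two calls are:

  # attach the bdev using the registered file-based PSK, then kick off the configured workload
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The refcount checks in between (keyring_get_keys piped through jq) confirm that attaching with key0 raises its refcnt to 2 while key1 stays at 1.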
00:25:30.427 13861.00 IOPS, 54.14 MiB/s 00:25:30.427 Latency(us) 00:25:30.427 [2024-11-08T02:29:32.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.427 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:30.427 nvme0n1 : 1.01 13904.46 54.31 0.00 0.00 9180.55 4557.73 21209.83 00:25:30.427 [2024-11-08T02:29:32.311Z] =================================================================================================================== 00:25:30.427 [2024-11-08T02:29:32.311Z] Total : 13904.46 54.31 0.00 0.00 9180.55 4557.73 21209.83 00:25:30.427 { 00:25:30.427 "results": [ 00:25:30.427 { 00:25:30.427 "job": "nvme0n1", 00:25:30.427 "core_mask": "0x2", 00:25:30.427 "workload": "randrw", 00:25:30.427 "percentage": 50, 00:25:30.427 "status": "finished", 00:25:30.427 "queue_depth": 128, 00:25:30.427 "io_size": 4096, 00:25:30.427 "runtime": 1.006152, 00:25:30.427 "iops": 13904.459763534735, 00:25:30.427 "mibps": 54.31429595130756, 00:25:30.427 "io_failed": 0, 00:25:30.427 "io_timeout": 0, 00:25:30.427 "avg_latency_us": 9180.549597764635, 00:25:30.427 "min_latency_us": 4557.730909090909, 00:25:30.427 "max_latency_us": 21209.832727272726 00:25:30.427 } 00:25:30.427 ], 00:25:30.427 "core_count": 1 00:25:30.427 } 00:25:30.427 02:29:32 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:30.427 02:29:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:30.685 02:29:32 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:25:30.685 02:29:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:30.685 02:29:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:30.685 02:29:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:30.685 02:29:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:30.685 02:29:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:30.944 02:29:32 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:30.944 02:29:32 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:25:30.944 02:29:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:30.944 02:29:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:30.944 02:29:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:30.944 02:29:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:30.944 02:29:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:31.202 02:29:33 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:25:31.202 02:29:33 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:31.202 02:29:33 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:31.202 02:29:33 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:31.202 02:29:33 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:31.202 02:29:33 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.202 02:29:33 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:31.202 02:29:33 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:31.202 02:29:33 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:31.202 02:29:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:31.461 [2024-11-08 02:29:33.272621] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:31.461 [2024-11-08 02:29:33.273320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f69d0 (107): Transport endpoint is not connected 00:25:31.461 [2024-11-08 02:29:33.274310] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f69d0 (9): Bad file descriptor 00:25:31.461 [2024-11-08 02:29:33.275308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:31.461 [2024-11-08 02:29:33.275327] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:31.461 [2024-11-08 02:29:33.275336] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:31.461 [2024-11-08 02:29:33.275345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:31.461 request: 00:25:31.461 { 00:25:31.461 "name": "nvme0", 00:25:31.461 "trtype": "tcp", 00:25:31.461 "traddr": "127.0.0.1", 00:25:31.461 "adrfam": "ipv4", 00:25:31.461 "trsvcid": "4420", 00:25:31.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:31.461 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:31.461 "prchk_reftag": false, 00:25:31.461 "prchk_guard": false, 00:25:31.461 "hdgst": false, 00:25:31.461 "ddgst": false, 00:25:31.461 "psk": "key1", 00:25:31.461 "allow_unrecognized_csi": false, 00:25:31.461 "method": "bdev_nvme_attach_controller", 00:25:31.461 "req_id": 1 00:25:31.461 } 00:25:31.461 Got JSON-RPC error response 00:25:31.461 response: 00:25:31.461 { 00:25:31.461 "code": -5, 00:25:31.461 "message": "Input/output error" 00:25:31.461 } 00:25:31.461 02:29:33 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:31.461 02:29:33 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:31.461 02:29:33 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:31.461 02:29:33 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:31.461 02:29:33 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:25:31.461 02:29:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:31.461 02:29:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:31.461 02:29:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:31.461 02:29:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:31.461 02:29:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:31.719 02:29:33 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:31.719 02:29:33 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:25:31.719 02:29:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:31.719 02:29:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:31.719 02:29:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:31.719 02:29:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:31.719 02:29:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:31.978 02:29:33 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:25:31.978 02:29:33 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:25:31.978 02:29:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:32.236 02:29:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:25:32.236 02:29:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:32.494 02:29:34 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:25:32.494 02:29:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:32.494 02:29:34 keyring_file -- keyring/file.sh@78 -- # jq length 00:25:32.753 02:29:34 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:25:32.753 02:29:34 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.znZGnu65cm 00:25:32.753 02:29:34 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.znZGnu65cm 00:25:32.753 02:29:34 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:25:32.753 02:29:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.znZGnu65cm 00:25:32.753 02:29:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:32.753 02:29:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:32.753 02:29:34 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:32.753 02:29:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:32.753 02:29:34 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.znZGnu65cm 00:25:32.753 02:29:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.znZGnu65cm 00:25:33.011 [2024-11-08 02:29:34.745716] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.znZGnu65cm': 0100660 00:25:33.011 [2024-11-08 02:29:34.745764] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:33.011 request: 00:25:33.011 { 00:25:33.011 "name": "key0", 00:25:33.011 "path": "/tmp/tmp.znZGnu65cm", 00:25:33.011 "method": "keyring_file_add_key", 00:25:33.011 "req_id": 1 00:25:33.011 } 00:25:33.011 Got JSON-RPC error response 00:25:33.011 response: 00:25:33.011 { 00:25:33.011 "code": -1, 00:25:33.011 "message": "Operation not permitted" 00:25:33.011 } 00:25:33.011 02:29:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:33.011 02:29:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:33.011 02:29:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:33.011 02:29:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:33.011 02:29:34 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.znZGnu65cm 00:25:33.011 02:29:34 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.znZGnu65cm 00:25:33.011 02:29:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.znZGnu65cm 00:25:33.269 02:29:34 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.znZGnu65cm 00:25:33.269 02:29:34 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:33.269 02:29:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:33.269 02:29:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:33.269 02:29:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:33.269 02:29:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:33.269 02:29:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:33.527 02:29:35 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:33.527 02:29:35 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:33.527 02:29:35 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:33.527 02:29:35 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:33.527 02:29:35 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:33.527 02:29:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.527 02:29:35 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:33.527 02:29:35 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.527 02:29:35 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:33.527 02:29:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:33.784 [2024-11-08 02:29:35.497866] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.znZGnu65cm': No such file or directory 00:25:33.784 [2024-11-08 02:29:35.497914] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:33.784 [2024-11-08 02:29:35.497933] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:33.784 [2024-11-08 02:29:35.497941] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:33.784 [2024-11-08 02:29:35.497949] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:33.784 [2024-11-08 02:29:35.497956] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:33.784 request: 00:25:33.784 { 00:25:33.784 "name": "nvme0", 00:25:33.784 "trtype": "tcp", 00:25:33.784 "traddr": "127.0.0.1", 00:25:33.784 "adrfam": "ipv4", 00:25:33.784 "trsvcid": "4420", 00:25:33.784 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:33.784 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:33.784 "prchk_reftag": false, 00:25:33.784 "prchk_guard": false, 00:25:33.784 "hdgst": false, 00:25:33.784 "ddgst": false, 00:25:33.784 "psk": "key0", 00:25:33.784 "allow_unrecognized_csi": false, 00:25:33.784 "method": "bdev_nvme_attach_controller", 00:25:33.785 "req_id": 1 00:25:33.785 } 00:25:33.785 Got JSON-RPC error response 00:25:33.785 response: 00:25:33.785 { 00:25:33.785 "code": -19, 00:25:33.785 "message": "No such device" 00:25:33.785 } 00:25:33.785 02:29:35 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:33.785 02:29:35 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:33.785 02:29:35 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:33.785 02:29:35 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:33.785 02:29:35 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:33.785 02:29:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:34.042 02:29:35 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:34.042 02:29:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:34.042 02:29:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:34.042 02:29:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:34.042 
02:29:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:34.042 02:29:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:34.042 02:29:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WU2xlmBkNq 00:25:34.042 02:29:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:34.042 02:29:35 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:34.042 02:29:35 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:34.042 02:29:35 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:34.042 02:29:35 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:34.042 02:29:35 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:34.042 02:29:35 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:34.042 02:29:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WU2xlmBkNq 00:25:34.042 02:29:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WU2xlmBkNq 00:25:34.042 02:29:35 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.WU2xlmBkNq 00:25:34.042 02:29:35 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WU2xlmBkNq 00:25:34.042 02:29:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WU2xlmBkNq 00:25:34.300 02:29:36 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:34.300 02:29:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:34.558 nvme0n1 00:25:34.558 02:29:36 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:34.558 02:29:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:34.558 02:29:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:34.558 02:29:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:34.558 02:29:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:34.558 02:29:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:34.825 02:29:36 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:34.825 02:29:36 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:34.825 02:29:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:35.112 02:29:36 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:35.112 02:29:36 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:35.112 02:29:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:35.112 02:29:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:35.112 02:29:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:35.387 02:29:37 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:35.387 02:29:37 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:35.387 02:29:37 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:25:35.387 02:29:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:35.387 02:29:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:35.387 02:29:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:35.387 02:29:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:35.645 02:29:37 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:35.645 02:29:37 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:35.645 02:29:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:35.902 02:29:37 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:35.902 02:29:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:35.902 02:29:37 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:36.160 02:29:37 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:36.160 02:29:37 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WU2xlmBkNq 00:25:36.160 02:29:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WU2xlmBkNq 00:25:36.417 02:29:38 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xAzOFmvYsG 00:25:36.417 02:29:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xAzOFmvYsG 00:25:36.675 02:29:38 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:36.675 02:29:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:36.933 nvme0n1 00:25:36.933 02:29:38 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:36.933 02:29:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:37.192 02:29:39 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:37.192 "subsystems": [ 00:25:37.192 { 00:25:37.192 "subsystem": "keyring", 00:25:37.192 "config": [ 00:25:37.192 { 00:25:37.192 "method": "keyring_file_add_key", 00:25:37.192 "params": { 00:25:37.192 "name": "key0", 00:25:37.192 "path": "/tmp/tmp.WU2xlmBkNq" 00:25:37.192 } 00:25:37.192 }, 00:25:37.192 { 00:25:37.192 "method": "keyring_file_add_key", 00:25:37.192 "params": { 00:25:37.192 "name": "key1", 00:25:37.192 "path": "/tmp/tmp.xAzOFmvYsG" 00:25:37.192 } 00:25:37.192 } 00:25:37.192 ] 00:25:37.192 }, 00:25:37.192 { 00:25:37.192 "subsystem": "iobuf", 00:25:37.192 "config": [ 00:25:37.192 { 00:25:37.192 "method": "iobuf_set_options", 00:25:37.192 "params": { 00:25:37.192 "small_pool_count": 8192, 00:25:37.192 "large_pool_count": 1024, 00:25:37.192 "small_bufsize": 8192, 00:25:37.192 "large_bufsize": 135168 00:25:37.192 } 00:25:37.192 } 00:25:37.192 ] 00:25:37.192 }, 00:25:37.192 { 00:25:37.192 "subsystem": "sock", 00:25:37.192 "config": [ 
00:25:37.192 { 00:25:37.192 "method": "sock_set_default_impl", 00:25:37.192 "params": { 00:25:37.192 "impl_name": "uring" 00:25:37.192 } 00:25:37.192 }, 00:25:37.192 { 00:25:37.192 "method": "sock_impl_set_options", 00:25:37.192 "params": { 00:25:37.192 "impl_name": "ssl", 00:25:37.192 "recv_buf_size": 4096, 00:25:37.192 "send_buf_size": 4096, 00:25:37.192 "enable_recv_pipe": true, 00:25:37.192 "enable_quickack": false, 00:25:37.192 "enable_placement_id": 0, 00:25:37.192 "enable_zerocopy_send_server": true, 00:25:37.192 "enable_zerocopy_send_client": false, 00:25:37.192 "zerocopy_threshold": 0, 00:25:37.192 "tls_version": 0, 00:25:37.192 "enable_ktls": false 00:25:37.192 } 00:25:37.192 }, 00:25:37.192 { 00:25:37.192 "method": "sock_impl_set_options", 00:25:37.192 "params": { 00:25:37.192 "impl_name": "posix", 00:25:37.192 "recv_buf_size": 2097152, 00:25:37.192 "send_buf_size": 2097152, 00:25:37.192 "enable_recv_pipe": true, 00:25:37.192 "enable_quickack": false, 00:25:37.192 "enable_placement_id": 0, 00:25:37.192 "enable_zerocopy_send_server": true, 00:25:37.192 "enable_zerocopy_send_client": false, 00:25:37.192 "zerocopy_threshold": 0, 00:25:37.192 "tls_version": 0, 00:25:37.192 "enable_ktls": false 00:25:37.192 } 00:25:37.192 }, 00:25:37.192 { 00:25:37.192 "method": "sock_impl_set_options", 00:25:37.192 "params": { 00:25:37.192 "impl_name": "uring", 00:25:37.192 "recv_buf_size": 2097152, 00:25:37.192 "send_buf_size": 2097152, 00:25:37.192 "enable_recv_pipe": true, 00:25:37.192 "enable_quickack": false, 00:25:37.192 "enable_placement_id": 0, 00:25:37.192 "enable_zerocopy_send_server": false, 00:25:37.192 "enable_zerocopy_send_client": false, 00:25:37.192 "zerocopy_threshold": 0, 00:25:37.192 "tls_version": 0, 00:25:37.192 "enable_ktls": false 00:25:37.192 } 00:25:37.192 } 00:25:37.192 ] 00:25:37.192 }, 00:25:37.192 { 00:25:37.193 "subsystem": "vmd", 00:25:37.193 "config": [] 00:25:37.193 }, 00:25:37.193 { 00:25:37.193 "subsystem": "accel", 00:25:37.193 "config": [ 00:25:37.193 { 00:25:37.193 "method": "accel_set_options", 00:25:37.193 "params": { 00:25:37.193 "small_cache_size": 128, 00:25:37.193 "large_cache_size": 16, 00:25:37.193 "task_count": 2048, 00:25:37.193 "sequence_count": 2048, 00:25:37.193 "buf_count": 2048 00:25:37.193 } 00:25:37.193 } 00:25:37.193 ] 00:25:37.193 }, 00:25:37.193 { 00:25:37.193 "subsystem": "bdev", 00:25:37.193 "config": [ 00:25:37.193 { 00:25:37.193 "method": "bdev_set_options", 00:25:37.193 "params": { 00:25:37.193 "bdev_io_pool_size": 65535, 00:25:37.193 "bdev_io_cache_size": 256, 00:25:37.193 "bdev_auto_examine": true, 00:25:37.193 "iobuf_small_cache_size": 128, 00:25:37.193 "iobuf_large_cache_size": 16 00:25:37.193 } 00:25:37.193 }, 00:25:37.193 { 00:25:37.193 "method": "bdev_raid_set_options", 00:25:37.193 "params": { 00:25:37.193 "process_window_size_kb": 1024, 00:25:37.193 "process_max_bandwidth_mb_sec": 0 00:25:37.193 } 00:25:37.193 }, 00:25:37.193 { 00:25:37.193 "method": "bdev_iscsi_set_options", 00:25:37.193 "params": { 00:25:37.193 "timeout_sec": 30 00:25:37.193 } 00:25:37.193 }, 00:25:37.193 { 00:25:37.193 "method": "bdev_nvme_set_options", 00:25:37.193 "params": { 00:25:37.193 "action_on_timeout": "none", 00:25:37.193 "timeout_us": 0, 00:25:37.193 "timeout_admin_us": 0, 00:25:37.193 "keep_alive_timeout_ms": 10000, 00:25:37.193 "arbitration_burst": 0, 00:25:37.193 "low_priority_weight": 0, 00:25:37.193 "medium_priority_weight": 0, 00:25:37.193 "high_priority_weight": 0, 00:25:37.193 "nvme_adminq_poll_period_us": 10000, 00:25:37.193 
"nvme_ioq_poll_period_us": 0, 00:25:37.193 "io_queue_requests": 512, 00:25:37.193 "delay_cmd_submit": true, 00:25:37.193 "transport_retry_count": 4, 00:25:37.193 "bdev_retry_count": 3, 00:25:37.193 "transport_ack_timeout": 0, 00:25:37.193 "ctrlr_loss_timeout_sec": 0, 00:25:37.193 "reconnect_delay_sec": 0, 00:25:37.193 "fast_io_fail_timeout_sec": 0, 00:25:37.193 "disable_auto_failback": false, 00:25:37.193 "generate_uuids": false, 00:25:37.193 "transport_tos": 0, 00:25:37.193 "nvme_error_stat": false, 00:25:37.193 "rdma_srq_size": 0, 00:25:37.193 "io_path_stat": false, 00:25:37.193 "allow_accel_sequence": false, 00:25:37.193 "rdma_max_cq_size": 0, 00:25:37.193 "rdma_cm_event_timeout_ms": 0, 00:25:37.193 "dhchap_digests": [ 00:25:37.193 "sha256", 00:25:37.193 "sha384", 00:25:37.193 "sha512" 00:25:37.193 ], 00:25:37.193 "dhchap_dhgroups": [ 00:25:37.193 "null", 00:25:37.193 "ffdhe2048", 00:25:37.193 "ffdhe3072", 00:25:37.193 "ffdhe4096", 00:25:37.193 "ffdhe6144", 00:25:37.193 "ffdhe8192" 00:25:37.193 ] 00:25:37.193 } 00:25:37.193 }, 00:25:37.193 { 00:25:37.193 "method": "bdev_nvme_attach_controller", 00:25:37.193 "params": { 00:25:37.193 "name": "nvme0", 00:25:37.193 "trtype": "TCP", 00:25:37.193 "adrfam": "IPv4", 00:25:37.193 "traddr": "127.0.0.1", 00:25:37.193 "trsvcid": "4420", 00:25:37.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:37.193 "prchk_reftag": false, 00:25:37.193 "prchk_guard": false, 00:25:37.193 "ctrlr_loss_timeout_sec": 0, 00:25:37.193 "reconnect_delay_sec": 0, 00:25:37.193 "fast_io_fail_timeout_sec": 0, 00:25:37.193 "psk": "key0", 00:25:37.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:37.193 "hdgst": false, 00:25:37.193 "ddgst": false 00:25:37.193 } 00:25:37.193 }, 00:25:37.193 { 00:25:37.193 "method": "bdev_nvme_set_hotplug", 00:25:37.193 "params": { 00:25:37.193 "period_us": 100000, 00:25:37.193 "enable": false 00:25:37.193 } 00:25:37.193 }, 00:25:37.193 { 00:25:37.193 "method": "bdev_wait_for_examine" 00:25:37.193 } 00:25:37.193 ] 00:25:37.193 }, 00:25:37.193 { 00:25:37.193 "subsystem": "nbd", 00:25:37.193 "config": [] 00:25:37.193 } 00:25:37.193 ] 00:25:37.193 }' 00:25:37.193 02:29:39 keyring_file -- keyring/file.sh@115 -- # killprocess 100487 00:25:37.193 02:29:39 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100487 ']' 00:25:37.193 02:29:39 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100487 00:25:37.193 02:29:39 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:37.193 02:29:39 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:37.193 02:29:39 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100487 00:25:37.452 killing process with pid 100487 00:25:37.452 Received shutdown signal, test time was about 1.000000 seconds 00:25:37.452 00:25:37.452 Latency(us) 00:25:37.452 [2024-11-08T02:29:39.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.452 [2024-11-08T02:29:39.336Z] =================================================================================================================== 00:25:37.452 [2024-11-08T02:29:39.336Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:37.452 02:29:39 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:37.452 02:29:39 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:37.452 02:29:39 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100487' 00:25:37.452 02:29:39 keyring_file -- common/autotest_common.sh@969 -- # 
kill 100487 00:25:37.452 02:29:39 keyring_file -- common/autotest_common.sh@974 -- # wait 100487 00:25:37.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:37.452 02:29:39 keyring_file -- keyring/file.sh@118 -- # bperfpid=100730 00:25:37.452 02:29:39 keyring_file -- keyring/file.sh@120 -- # waitforlisten 100730 /var/tmp/bperf.sock 00:25:37.452 02:29:39 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100730 ']' 00:25:37.452 02:29:39 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:37.452 02:29:39 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:37.452 02:29:39 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:37.452 02:29:39 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:37.452 02:29:39 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.452 02:29:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:37.452 02:29:39 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:37.452 "subsystems": [ 00:25:37.452 { 00:25:37.452 "subsystem": "keyring", 00:25:37.452 "config": [ 00:25:37.452 { 00:25:37.452 "method": "keyring_file_add_key", 00:25:37.452 "params": { 00:25:37.452 "name": "key0", 00:25:37.452 "path": "/tmp/tmp.WU2xlmBkNq" 00:25:37.452 } 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "method": "keyring_file_add_key", 00:25:37.452 "params": { 00:25:37.452 "name": "key1", 00:25:37.452 "path": "/tmp/tmp.xAzOFmvYsG" 00:25:37.452 } 00:25:37.452 } 00:25:37.452 ] 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "subsystem": "iobuf", 00:25:37.452 "config": [ 00:25:37.452 { 00:25:37.452 "method": "iobuf_set_options", 00:25:37.452 "params": { 00:25:37.452 "small_pool_count": 8192, 00:25:37.452 "large_pool_count": 1024, 00:25:37.452 "small_bufsize": 8192, 00:25:37.452 "large_bufsize": 135168 00:25:37.452 } 00:25:37.452 } 00:25:37.452 ] 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "subsystem": "sock", 00:25:37.452 "config": [ 00:25:37.452 { 00:25:37.452 "method": "sock_set_default_impl", 00:25:37.452 "params": { 00:25:37.452 "impl_name": "uring" 00:25:37.452 } 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "method": "sock_impl_set_options", 00:25:37.452 "params": { 00:25:37.452 "impl_name": "ssl", 00:25:37.452 "recv_buf_size": 4096, 00:25:37.452 "send_buf_size": 4096, 00:25:37.452 "enable_recv_pipe": true, 00:25:37.452 "enable_quickack": false, 00:25:37.452 "enable_placement_id": 0, 00:25:37.452 "enable_zerocopy_send_server": true, 00:25:37.452 "enable_zerocopy_send_client": false, 00:25:37.452 "zerocopy_threshold": 0, 00:25:37.452 "tls_version": 0, 00:25:37.452 "enable_ktls": false 00:25:37.452 } 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "method": "sock_impl_set_options", 00:25:37.452 "params": { 00:25:37.452 "impl_name": "posix", 00:25:37.452 "recv_buf_size": 2097152, 00:25:37.452 "send_buf_size": 2097152, 00:25:37.452 "enable_recv_pipe": true, 00:25:37.452 "enable_quickack": false, 00:25:37.452 "enable_placement_id": 0, 00:25:37.452 "enable_zerocopy_send_server": true, 00:25:37.452 "enable_zerocopy_send_client": false, 00:25:37.452 "zerocopy_threshold": 0, 00:25:37.452 "tls_version": 0, 00:25:37.452 "enable_ktls": false 00:25:37.452 } 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "method": "sock_impl_set_options", 
00:25:37.452 "params": { 00:25:37.452 "impl_name": "uring", 00:25:37.452 "recv_buf_size": 2097152, 00:25:37.452 "send_buf_size": 2097152, 00:25:37.452 "enable_recv_pipe": true, 00:25:37.452 "enable_quickack": false, 00:25:37.452 "enable_placement_id": 0, 00:25:37.452 "enable_zerocopy_send_server": false, 00:25:37.452 "enable_zerocopy_send_client": false, 00:25:37.452 "zerocopy_threshold": 0, 00:25:37.452 "tls_version": 0, 00:25:37.452 "enable_ktls": false 00:25:37.452 } 00:25:37.452 } 00:25:37.452 ] 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "subsystem": "vmd", 00:25:37.452 "config": [] 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "subsystem": "accel", 00:25:37.452 "config": [ 00:25:37.452 { 00:25:37.452 "method": "accel_set_options", 00:25:37.452 "params": { 00:25:37.452 "small_cache_size": 128, 00:25:37.452 "large_cache_size": 16, 00:25:37.452 "task_count": 2048, 00:25:37.452 "sequence_count": 2048, 00:25:37.452 "buf_count": 2048 00:25:37.452 } 00:25:37.452 } 00:25:37.452 ] 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "subsystem": "bdev", 00:25:37.452 "config": [ 00:25:37.452 { 00:25:37.452 "method": "bdev_set_options", 00:25:37.452 "params": { 00:25:37.452 "bdev_io_pool_size": 65535, 00:25:37.452 "bdev_io_cache_size": 256, 00:25:37.452 "bdev_auto_examine": true, 00:25:37.452 "iobuf_small_cache_size": 128, 00:25:37.452 "iobuf_large_cache_size": 16 00:25:37.452 } 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "method": "bdev_raid_set_options", 00:25:37.452 "params": { 00:25:37.452 "process_window_size_kb": 1024, 00:25:37.452 "process_max_bandwidth_mb_sec": 0 00:25:37.452 } 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "method": "bdev_iscsi_set_options", 00:25:37.452 "params": { 00:25:37.452 "timeout_sec": 30 00:25:37.452 } 00:25:37.452 }, 00:25:37.452 { 00:25:37.452 "method": "bdev_nvme_set_options", 00:25:37.452 "params": { 00:25:37.452 "action_on_timeout": "none", 00:25:37.452 "timeout_us": 0, 00:25:37.452 "timeout_admin_us": 0, 00:25:37.452 "keep_alive_timeout_ms": 10000, 00:25:37.452 "arbitration_burst": 0, 00:25:37.452 "low_priority_weight": 0, 00:25:37.452 "medium_priority_weight": 0, 00:25:37.452 "high_priority_weight": 0, 00:25:37.452 "nvme_adminq_poll_period_us": 10000, 00:25:37.452 "nvme_ioq_poll_period_us": 0, 00:25:37.452 "io_queue_requests": 512, 00:25:37.452 "delay_cmd_submit": true, 00:25:37.452 "transport_retry_count": 4, 00:25:37.452 "bdev_retry_count": 3, 00:25:37.452 "transport_ack_timeout": 0, 00:25:37.452 "ctrlr_loss_timeout_sec": 0, 00:25:37.452 "reconnect_delay_sec": 0, 00:25:37.452 "fast_io_fail_timeout_sec": 0, 00:25:37.452 "disable_auto_failback": false, 00:25:37.452 "generate_uuids": false, 00:25:37.452 "transport_tos": 0, 00:25:37.453 "nvme_error_stat": false, 00:25:37.453 "rdma_srq_size": 0, 00:25:37.453 "io_path_stat": false, 00:25:37.453 "allow_accel_sequence": false, 00:25:37.453 "rdma_max_cq_size": 0, 00:25:37.453 "rdma_cm_event_timeout_ms": 0, 00:25:37.453 "dhchap_digests": [ 00:25:37.453 "sha256", 00:25:37.453 "sha384", 00:25:37.453 "sha512" 00:25:37.453 ], 00:25:37.453 "dhchap_dhgroups": [ 00:25:37.453 "null", 00:25:37.453 "ffdhe2048", 00:25:37.453 "ffdhe3072", 00:25:37.453 "ffdhe4096", 00:25:37.453 "ffdhe6144", 00:25:37.453 "ffdhe8192" 00:25:37.453 ] 00:25:37.453 } 00:25:37.453 }, 00:25:37.453 { 00:25:37.453 "method": "bdev_nvme_attach_controller", 00:25:37.453 "params": { 00:25:37.453 "name": "nvme0", 00:25:37.453 "trtype": "TCP", 00:25:37.453 "adrfam": "IPv4", 00:25:37.453 "traddr": "127.0.0.1", 00:25:37.453 "trsvcid": "4420", 00:25:37.453 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:25:37.453 "prchk_reftag": false, 00:25:37.453 "prchk_guard": false, 00:25:37.453 "ctrlr_loss_timeout_sec": 0, 00:25:37.453 "reconnect_delay_sec": 0, 00:25:37.453 "fast_io_fail_timeout_sec": 0, 00:25:37.453 "psk": "key0", 00:25:37.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:37.453 "hdgst": false, 00:25:37.453 "ddgst": false 00:25:37.453 } 00:25:37.453 }, 00:25:37.453 { 00:25:37.453 "method": "bdev_nvme_set_hotplug", 00:25:37.453 "params": { 00:25:37.453 "period_us": 100000, 00:25:37.453 "enable": false 00:25:37.453 } 00:25:37.453 }, 00:25:37.453 { 00:25:37.453 "method": "bdev_wait_for_examine" 00:25:37.453 } 00:25:37.453 ] 00:25:37.453 }, 00:25:37.453 { 00:25:37.453 "subsystem": "nbd", 00:25:37.453 "config": [] 00:25:37.453 } 00:25:37.453 ] 00:25:37.453 }' 00:25:37.453 [2024-11-08 02:29:39.257547] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:37.453 [2024-11-08 02:29:39.257653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100730 ] 00:25:37.711 [2024-11-08 02:29:39.382230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.711 [2024-11-08 02:29:39.414210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.711 [2024-11-08 02:29:39.521794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:37.711 [2024-11-08 02:29:39.557187] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:38.644 02:29:40 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.644 02:29:40 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:38.644 02:29:40 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:38.644 02:29:40 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:38.644 02:29:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:38.644 02:29:40 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:38.644 02:29:40 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:38.644 02:29:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:38.644 02:29:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:38.644 02:29:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:38.644 02:29:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:38.644 02:29:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:38.902 02:29:40 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:38.902 02:29:40 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:38.902 02:29:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:38.902 02:29:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:38.902 02:29:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:38.902 02:29:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:38.902 02:29:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:39.160 02:29:41 keyring_file -- keyring/file.sh@123 -- 
# (( 1 == 1 )) 00:25:39.160 02:29:41 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:39.160 02:29:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:39.160 02:29:41 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:25:39.418 02:29:41 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:39.418 02:29:41 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:39.418 02:29:41 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.WU2xlmBkNq /tmp/tmp.xAzOFmvYsG 00:25:39.418 02:29:41 keyring_file -- keyring/file.sh@20 -- # killprocess 100730 00:25:39.418 02:29:41 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100730 ']' 00:25:39.418 02:29:41 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100730 00:25:39.418 02:29:41 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:39.418 02:29:41 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:39.418 02:29:41 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100730 00:25:39.418 02:29:41 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:39.418 02:29:41 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:39.418 killing process with pid 100730 00:25:39.418 02:29:41 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100730' 00:25:39.418 02:29:41 keyring_file -- common/autotest_common.sh@969 -- # kill 100730 00:25:39.418 Received shutdown signal, test time was about 1.000000 seconds 00:25:39.418 00:25:39.418 Latency(us) 00:25:39.418 [2024-11-08T02:29:41.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.418 [2024-11-08T02:29:41.302Z] =================================================================================================================== 00:25:39.418 [2024-11-08T02:29:41.302Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:39.418 02:29:41 keyring_file -- common/autotest_common.sh@974 -- # wait 100730 00:25:39.677 02:29:41 keyring_file -- keyring/file.sh@21 -- # killprocess 100482 00:25:39.677 02:29:41 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100482 ']' 00:25:39.677 02:29:41 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100482 00:25:39.677 02:29:41 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:39.677 02:29:41 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:39.677 02:29:41 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100482 00:25:39.677 02:29:41 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:39.677 02:29:41 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:39.677 killing process with pid 100482 00:25:39.677 02:29:41 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100482' 00:25:39.677 02:29:41 keyring_file -- common/autotest_common.sh@969 -- # kill 100482 00:25:39.677 02:29:41 keyring_file -- common/autotest_common.sh@974 -- # wait 100482 00:25:39.936 00:25:39.936 real 0m14.126s 00:25:39.936 user 0m36.527s 00:25:39.936 sys 0m2.647s 00:25:39.936 02:29:41 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:39.936 02:29:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:39.936 ************************************ 00:25:39.936 END TEST 
keyring_file 00:25:39.936 ************************************ 00:25:39.936 02:29:41 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:25:39.936 02:29:41 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:39.936 02:29:41 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:39.936 02:29:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:39.936 02:29:41 -- common/autotest_common.sh@10 -- # set +x 00:25:39.936 ************************************ 00:25:39.936 START TEST keyring_linux 00:25:39.936 ************************************ 00:25:39.936 02:29:41 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:39.936 Joined session keyring: 910736014 00:25:39.936 * Looking for test storage... 00:25:39.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:39.936 02:29:41 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:39.936 02:29:41 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:25:39.936 02:29:41 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:40.196 02:29:41 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:40.196 02:29:41 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.196 02:29:41 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:40.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.196 --rc genhtml_branch_coverage=1 00:25:40.196 --rc genhtml_function_coverage=1 00:25:40.196 --rc genhtml_legend=1 00:25:40.196 --rc geninfo_all_blocks=1 00:25:40.196 --rc geninfo_unexecuted_blocks=1 00:25:40.196 00:25:40.196 ' 00:25:40.196 02:29:41 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:40.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.196 --rc genhtml_branch_coverage=1 00:25:40.196 --rc genhtml_function_coverage=1 00:25:40.196 --rc genhtml_legend=1 00:25:40.196 --rc geninfo_all_blocks=1 00:25:40.196 --rc geninfo_unexecuted_blocks=1 00:25:40.196 00:25:40.196 ' 00:25:40.196 02:29:41 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:40.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.196 --rc genhtml_branch_coverage=1 00:25:40.196 --rc genhtml_function_coverage=1 00:25:40.196 --rc genhtml_legend=1 00:25:40.196 --rc geninfo_all_blocks=1 00:25:40.196 --rc geninfo_unexecuted_blocks=1 00:25:40.196 00:25:40.196 ' 00:25:40.196 02:29:41 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:40.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.196 --rc genhtml_branch_coverage=1 00:25:40.196 --rc genhtml_function_coverage=1 00:25:40.196 --rc genhtml_legend=1 00:25:40.196 --rc geninfo_all_blocks=1 00:25:40.196 --rc geninfo_unexecuted_blocks=1 00:25:40.196 00:25:40.196 ' 00:25:40.196 02:29:41 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:40.196 02:29:41 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.196 02:29:41 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f72880-00cc-41cd-b50e-5c2a72cc9156 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f72880-00cc-41cd-b50e-5c2a72cc9156 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.196 02:29:41 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.196 02:29:41 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.196 02:29:41 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.196 02:29:41 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.196 02:29:41 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:40.196 02:29:41 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:40.196 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:40.196 02:29:41 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:40.196 02:29:41 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:40.196 02:29:41 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:40.196 02:29:41 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:40.196 02:29:41 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:40.196 02:29:41 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:40.197 02:29:41 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:40.197 02:29:41 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:40.197 /tmp/:spdk-test:key0 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:40.197 02:29:41 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:40.197 02:29:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:40.197 02:29:41 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:40.197 02:29:42 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:40.197 /tmp/:spdk-test:key1 00:25:40.197 02:29:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:40.197 02:29:42 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100852 00:25:40.197 02:29:42 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:40.197 02:29:42 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100852 00:25:40.197 02:29:42 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100852 ']' 00:25:40.197 02:29:42 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.197 02:29:42 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:40.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.197 02:29:42 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.197 02:29:42 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:40.197 02:29:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:40.456 [2024-11-08 02:29:42.091860] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
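[Editorial note] The prep_key/format_interchange_psk trace above (key=00112233445566778899aabbccddeeff, digest=0, piped into "python -") wraps the raw key bytes into the NVMe TLS PSK interchange string that is later loaded into the session keyring. A minimal sketch of that framing follows; it assumes the standard interchange layout of base64(key bytes followed by their CRC-32 in little-endian order) — the CRC variant and byte order are assumptions, not read from the trace — so compare its output against the NVMeTLSkey-1 literal that appears when keyctl loads key0 below.

  # Editorial sketch, not part of the test run: rebuild the NVMeTLSkey-1 string for key0.
  # Assumes interchange framing = base64(key || CRC-32(key), CRC little-endian).
  key=00112233445566778899aabbccddeeff
  digest=0
  python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
payload = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"NVMeTLSkey-1:{digest:02x}:{payload}:")
EOF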
00:25:40.456 [2024-11-08 02:29:42.091946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100852 ] 00:25:40.456 [2024-11-08 02:29:42.223154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.456 [2024-11-08 02:29:42.257059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.456 [2024-11-08 02:29:42.290780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:40.715 02:29:42 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.715 02:29:42 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:40.715 02:29:42 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:40.715 02:29:42 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.715 02:29:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:40.715 [2024-11-08 02:29:42.403612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.715 null0 00:25:40.715 [2024-11-08 02:29:42.435557] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:40.715 [2024-11-08 02:29:42.435738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:40.715 02:29:42 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.715 02:29:42 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:40.715 793477703 00:25:40.715 02:29:42 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:40.715 948317445 00:25:40.715 02:29:42 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:40.715 02:29:42 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100861 00:25:40.715 02:29:42 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100861 /var/tmp/bperf.sock 00:25:40.715 02:29:42 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100861 ']' 00:25:40.715 02:29:42 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:40.715 02:29:42 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:40.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:40.715 02:29:42 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:40.715 02:29:42 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:40.715 02:29:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:40.715 [2024-11-08 02:29:42.509604] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:40.715 [2024-11-08 02:29:42.509700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100861 ] 00:25:40.974 [2024-11-08 02:29:42.643713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.974 [2024-11-08 02:29:42.675630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.974 02:29:42 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.974 02:29:42 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:40.974 02:29:42 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:40.974 02:29:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:41.232 02:29:42 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:41.232 02:29:42 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:41.490 [2024-11-08 02:29:43.265156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:41.490 02:29:43 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:41.490 02:29:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:41.749 [2024-11-08 02:29:43.539543] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:41.749 nvme0n1 00:25:41.749 02:29:43 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:41.749 02:29:43 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:41.749 02:29:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:41.749 02:29:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:41.749 02:29:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:41.749 02:29:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:42.007 02:29:43 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:42.007 02:29:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:42.007 02:29:43 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:42.007 02:29:43 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:42.007 02:29:43 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:42.007 02:29:43 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:42.007 02:29:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:42.265 02:29:44 keyring_linux -- keyring/linux.sh@25 -- # sn=793477703 00:25:42.265 02:29:44 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:42.265 02:29:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
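[Editorial note] Condensed for readability, the keyring_linux setup traced in this block amounts to the following sequence. This is an editorial sketch, not output from the run; the rpc.py path, bperf socket, NQNs and the PSK literal are taken verbatim from the trace above, and the bdevperf app is assumed to have been started with --wait-for-rpc as shown there.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

  # Store the interchange-format PSK in the kernel session keyring instead of a file.
  sn=$(keyctl add user ":spdk-test:key0" "$psk" @s)

  # Enable keyring-backed key lookup in the bdevperf app, finish its deferred init,
  # then attach the controller using the keyring entry name as the TLS PSK.
  "$rpc" -s /var/tmp/bperf.sock keyring_linux_set_options --enable
  "$rpc" -s /var/tmp/bperf.sock framework_start_init
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key0

  # The checks that follow in the trace verify the key via keyctl.
  keyctl search @s user ":spdk-test:key0"   # should print the serial stored in $sn
  keyctl print "$sn"                        # should print the NVMeTLSkey-1 literal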
00:25:42.265 02:29:44 keyring_linux -- keyring/linux.sh@26 -- # [[ 793477703 == \7\9\3\4\7\7\7\0\3 ]] 00:25:42.265 02:29:44 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 793477703 00:25:42.265 02:29:44 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:42.265 02:29:44 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:42.524 Running I/O for 1 seconds... 00:25:43.458 15558.00 IOPS, 60.77 MiB/s 00:25:43.458 Latency(us) 00:25:43.458 [2024-11-08T02:29:45.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.458 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:43.458 nvme0n1 : 1.01 15565.76 60.80 0.00 0.00 8187.04 5213.09 13285.93 00:25:43.458 [2024-11-08T02:29:45.342Z] =================================================================================================================== 00:25:43.458 [2024-11-08T02:29:45.342Z] Total : 15565.76 60.80 0.00 0.00 8187.04 5213.09 13285.93 00:25:43.458 { 00:25:43.458 "results": [ 00:25:43.458 { 00:25:43.458 "job": "nvme0n1", 00:25:43.458 "core_mask": "0x2", 00:25:43.458 "workload": "randread", 00:25:43.458 "status": "finished", 00:25:43.458 "queue_depth": 128, 00:25:43.458 "io_size": 4096, 00:25:43.458 "runtime": 1.007789, 00:25:43.458 "iops": 15565.758308534821, 00:25:43.458 "mibps": 60.803743392714146, 00:25:43.458 "io_failed": 0, 00:25:43.458 "io_timeout": 0, 00:25:43.458 "avg_latency_us": 8187.043696865384, 00:25:43.458 "min_latency_us": 5213.090909090909, 00:25:43.458 "max_latency_us": 13285.934545454546 00:25:43.458 } 00:25:43.458 ], 00:25:43.458 "core_count": 1 00:25:43.458 } 00:25:43.458 02:29:45 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:43.459 02:29:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:43.717 02:29:45 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:43.717 02:29:45 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:43.717 02:29:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:43.717 02:29:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:43.717 02:29:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:43.717 02:29:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:43.975 02:29:45 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:43.975 02:29:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:43.975 02:29:45 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:43.975 02:29:45 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:43.975 02:29:45 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:25:43.975 02:29:45 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:43.975 
02:29:45 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:43.975 02:29:45 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:43.975 02:29:45 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:43.975 02:29:45 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:43.975 02:29:45 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:43.975 02:29:45 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:44.234 [2024-11-08 02:29:45.989076] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:44.234 [2024-11-08 02:29:45.989814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b77a0 (107): Transport endpoint is not connected 00:25:44.234 [2024-11-08 02:29:45.990769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b77a0 (9): Bad file descriptor 00:25:44.234 [2024-11-08 02:29:45.991765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:44.234 [2024-11-08 02:29:45.991800] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:44.234 [2024-11-08 02:29:45.991810] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:44.234 [2024-11-08 02:29:45.991820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:44.234 request: 00:25:44.234 { 00:25:44.234 "name": "nvme0", 00:25:44.234 "trtype": "tcp", 00:25:44.234 "traddr": "127.0.0.1", 00:25:44.234 "adrfam": "ipv4", 00:25:44.234 "trsvcid": "4420", 00:25:44.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.234 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:44.234 "prchk_reftag": false, 00:25:44.234 "prchk_guard": false, 00:25:44.234 "hdgst": false, 00:25:44.234 "ddgst": false, 00:25:44.234 "psk": ":spdk-test:key1", 00:25:44.234 "allow_unrecognized_csi": false, 00:25:44.234 "method": "bdev_nvme_attach_controller", 00:25:44.234 "req_id": 1 00:25:44.234 } 00:25:44.234 Got JSON-RPC error response 00:25:44.234 response: 00:25:44.234 { 00:25:44.234 "code": -5, 00:25:44.234 "message": "Input/output error" 00:25:44.234 } 00:25:44.234 02:29:46 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:25:44.234 02:29:46 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:44.234 02:29:46 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:44.234 02:29:46 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@33 -- # sn=793477703 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 793477703 00:25:44.234 1 links removed 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@33 -- # sn=948317445 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 948317445 00:25:44.234 1 links removed 00:25:44.234 02:29:46 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100861 00:25:44.234 02:29:46 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100861 ']' 00:25:44.234 02:29:46 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100861 00:25:44.234 02:29:46 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:44.234 02:29:46 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:44.234 02:29:46 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100861 00:25:44.234 killing process with pid 100861 00:25:44.234 Received shutdown signal, test time was about 1.000000 seconds 00:25:44.234 00:25:44.234 Latency(us) 00:25:44.234 [2024-11-08T02:29:46.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.234 [2024-11-08T02:29:46.118Z] =================================================================================================================== 00:25:44.234 [2024-11-08T02:29:46.118Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.234 02:29:46 keyring_linux -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:44.234 02:29:46 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:44.235 02:29:46 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100861' 00:25:44.235 02:29:46 keyring_linux -- common/autotest_common.sh@969 -- # kill 100861 00:25:44.235 02:29:46 keyring_linux -- common/autotest_common.sh@974 -- # wait 100861 00:25:44.494 02:29:46 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100852 00:25:44.494 02:29:46 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100852 ']' 00:25:44.494 02:29:46 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100852 00:25:44.494 02:29:46 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:44.494 02:29:46 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:44.494 02:29:46 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100852 00:25:44.494 killing process with pid 100852 00:25:44.494 02:29:46 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:44.494 02:29:46 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:44.494 02:29:46 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100852' 00:25:44.494 02:29:46 keyring_linux -- common/autotest_common.sh@969 -- # kill 100852 00:25:44.494 02:29:46 keyring_linux -- common/autotest_common.sh@974 -- # wait 100852 00:25:44.753 00:25:44.753 real 0m4.726s 00:25:44.753 user 0m9.685s 00:25:44.753 sys 0m1.255s 00:25:44.753 ************************************ 00:25:44.753 END TEST keyring_linux 00:25:44.753 ************************************ 00:25:44.753 02:29:46 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:44.753 02:29:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:44.753 02:29:46 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:25:44.753 02:29:46 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:44.753 02:29:46 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:44.753 02:29:46 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:25:44.753 02:29:46 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:25:44.753 02:29:46 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:25:44.753 02:29:46 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:44.753 02:29:46 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:44.753 02:29:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:44.753 02:29:46 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:25:44.753 02:29:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:44.753 02:29:46 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:25:44.753 02:29:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:44.753 02:29:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:44.753 02:29:46 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:25:44.753 02:29:46 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:25:44.753 02:29:46 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:25:44.753 02:29:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:44.753 02:29:46 -- common/autotest_common.sh@10 -- # set +x 00:25:44.753 02:29:46 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:25:44.753 02:29:46 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:44.753 02:29:46 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:44.753 02:29:46 -- common/autotest_common.sh@10 -- # set +x 00:25:46.661 INFO: APP EXITING 00:25:46.661 INFO: 
killing all VMs 00:25:46.661 INFO: killing vhost app 00:25:46.661 INFO: EXIT DONE 00:25:47.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:47.227 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:47.227 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:48.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:48.161 Cleaning 00:25:48.161 Removing: /var/run/dpdk/spdk0/config 00:25:48.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:48.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:48.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:48.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:48.161 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:48.161 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:48.161 Removing: /var/run/dpdk/spdk1/config 00:25:48.161 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:48.161 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:48.161 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:48.161 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:48.161 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:48.161 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:48.161 Removing: /var/run/dpdk/spdk2/config 00:25:48.161 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:48.161 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:48.161 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:48.161 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:48.161 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:48.161 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:48.161 Removing: /var/run/dpdk/spdk3/config 00:25:48.161 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:48.161 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:48.161 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:48.161 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:48.161 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:48.161 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:48.161 Removing: /var/run/dpdk/spdk4/config 00:25:48.161 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:48.161 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:48.161 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:48.161 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:48.161 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:48.161 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:48.161 Removing: /dev/shm/nvmf_trace.0 00:25:48.161 Removing: /dev/shm/spdk_tgt_trace.pid69912 00:25:48.161 Removing: /var/run/dpdk/spdk0 00:25:48.161 Removing: /var/run/dpdk/spdk1 00:25:48.161 Removing: /var/run/dpdk/spdk2 00:25:48.161 Removing: /var/run/dpdk/spdk3 00:25:48.161 Removing: /var/run/dpdk/spdk4 00:25:48.161 Removing: /var/run/dpdk/spdk_pid100015 00:25:48.161 Removing: /var/run/dpdk/spdk_pid100482 00:25:48.161 Removing: /var/run/dpdk/spdk_pid100487 00:25:48.161 Removing: /var/run/dpdk/spdk_pid100730 00:25:48.161 Removing: /var/run/dpdk/spdk_pid100852 00:25:48.161 Removing: /var/run/dpdk/spdk_pid100861 00:25:48.161 Removing: /var/run/dpdk/spdk_pid69764 00:25:48.161 Removing: /var/run/dpdk/spdk_pid69912 00:25:48.161 Removing: /var/run/dpdk/spdk_pid70105 00:25:48.161 Removing: /var/run/dpdk/spdk_pid70191 00:25:48.161 Removing: 
/var/run/dpdk/spdk_pid70206 00:25:48.161 Removing: /var/run/dpdk/spdk_pid70315 00:25:48.161 Removing: /var/run/dpdk/spdk_pid70326 00:25:48.161 Removing: /var/run/dpdk/spdk_pid70460 00:25:48.161 Removing: /var/run/dpdk/spdk_pid70655 00:25:48.161 Removing: /var/run/dpdk/spdk_pid70804 00:25:48.161 Removing: /var/run/dpdk/spdk_pid70882 00:25:48.161 Removing: /var/run/dpdk/spdk_pid70953 00:25:48.161 Removing: /var/run/dpdk/spdk_pid71039 00:25:48.161 Removing: /var/run/dpdk/spdk_pid71111 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71149 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71185 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71249 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71341 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71778 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71819 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71863 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71871 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71926 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71935 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71994 00:25:48.420 Removing: /var/run/dpdk/spdk_pid71997 00:25:48.420 Removing: /var/run/dpdk/spdk_pid72043 00:25:48.420 Removing: /var/run/dpdk/spdk_pid72053 00:25:48.420 Removing: /var/run/dpdk/spdk_pid72093 00:25:48.420 Removing: /var/run/dpdk/spdk_pid72098 00:25:48.420 Removing: /var/run/dpdk/spdk_pid72234 00:25:48.420 Removing: /var/run/dpdk/spdk_pid72264 00:25:48.420 Removing: /var/run/dpdk/spdk_pid72347 00:25:48.420 Removing: /var/run/dpdk/spdk_pid72681 00:25:48.420 Removing: /var/run/dpdk/spdk_pid72698 00:25:48.420 Removing: /var/run/dpdk/spdk_pid72729 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72743 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72758 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72777 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72791 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72806 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72825 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72839 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72854 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72873 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72887 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72902 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72920 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72935 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72945 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72964 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72977 00:25:48.421 Removing: /var/run/dpdk/spdk_pid72993 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73029 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73037 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73072 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73138 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73167 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73171 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73205 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73209 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73216 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73259 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73272 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73301 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73305 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73320 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73324 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73333 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73343 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73347 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73362 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73385 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73411 
00:25:48.421 Removing: /var/run/dpdk/spdk_pid73421 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73444 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73459 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73461 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73501 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73513 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73534 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73547 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73549 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73562 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73564 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73566 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73579 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73581 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73663 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73705 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73812 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73840 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73885 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73905 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73916 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73936 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73968 00:25:48.421 Removing: /var/run/dpdk/spdk_pid73983 00:25:48.421 Removing: /var/run/dpdk/spdk_pid74061 00:25:48.421 Removing: /var/run/dpdk/spdk_pid74077 00:25:48.421 Removing: /var/run/dpdk/spdk_pid74116 00:25:48.421 Removing: /var/run/dpdk/spdk_pid74171 00:25:48.421 Removing: /var/run/dpdk/spdk_pid74222 00:25:48.421 Removing: /var/run/dpdk/spdk_pid74245 00:25:48.680 Removing: /var/run/dpdk/spdk_pid74343 00:25:48.680 Removing: /var/run/dpdk/spdk_pid74386 00:25:48.680 Removing: /var/run/dpdk/spdk_pid74418 00:25:48.680 Removing: /var/run/dpdk/spdk_pid74648 00:25:48.680 Removing: /var/run/dpdk/spdk_pid74740 00:25:48.680 Removing: /var/run/dpdk/spdk_pid74767 00:25:48.680 Removing: /var/run/dpdk/spdk_pid74798 00:25:48.680 Removing: /var/run/dpdk/spdk_pid74826 00:25:48.680 Removing: /var/run/dpdk/spdk_pid74865 00:25:48.680 Removing: /var/run/dpdk/spdk_pid74893 00:25:48.680 Removing: /var/run/dpdk/spdk_pid74924 00:25:48.680 Removing: /var/run/dpdk/spdk_pid75315 00:25:48.680 Removing: /var/run/dpdk/spdk_pid75355 00:25:48.680 Removing: /var/run/dpdk/spdk_pid75688 00:25:48.680 Removing: /var/run/dpdk/spdk_pid76156 00:25:48.680 Removing: /var/run/dpdk/spdk_pid76420 00:25:48.680 Removing: /var/run/dpdk/spdk_pid77264 00:25:48.680 Removing: /var/run/dpdk/spdk_pid78166 00:25:48.680 Removing: /var/run/dpdk/spdk_pid78283 00:25:48.680 Removing: /var/run/dpdk/spdk_pid78351 00:25:48.680 Removing: /var/run/dpdk/spdk_pid79754 00:25:48.680 Removing: /var/run/dpdk/spdk_pid80062 00:25:48.680 Removing: /var/run/dpdk/spdk_pid83785 00:25:48.680 Removing: /var/run/dpdk/spdk_pid84147 00:25:48.680 Removing: /var/run/dpdk/spdk_pid84256 00:25:48.680 Removing: /var/run/dpdk/spdk_pid84383 00:25:48.680 Removing: /var/run/dpdk/spdk_pid84410 00:25:48.680 Removing: /var/run/dpdk/spdk_pid84431 00:25:48.680 Removing: /var/run/dpdk/spdk_pid84452 00:25:48.680 Removing: /var/run/dpdk/spdk_pid84542 00:25:48.680 Removing: /var/run/dpdk/spdk_pid84671 00:25:48.680 Removing: /var/run/dpdk/spdk_pid84807 00:25:48.680 Removing: /var/run/dpdk/spdk_pid84881 00:25:48.680 Removing: /var/run/dpdk/spdk_pid85062 00:25:48.680 Removing: /var/run/dpdk/spdk_pid85138 00:25:48.680 Removing: /var/run/dpdk/spdk_pid85223 00:25:48.680 Removing: /var/run/dpdk/spdk_pid85576 00:25:48.680 Removing: /var/run/dpdk/spdk_pid85985 00:25:48.680 Removing: /var/run/dpdk/spdk_pid85986 00:25:48.680 Removing: 
/var/run/dpdk/spdk_pid85987 00:25:48.680 Removing: /var/run/dpdk/spdk_pid86248 00:25:48.680 Removing: /var/run/dpdk/spdk_pid86487 00:25:48.680 Removing: /var/run/dpdk/spdk_pid86494 00:25:48.680 Removing: /var/run/dpdk/spdk_pid88870 00:25:48.680 Removing: /var/run/dpdk/spdk_pid88872 00:25:48.680 Removing: /var/run/dpdk/spdk_pid89197 00:25:48.681 Removing: /var/run/dpdk/spdk_pid89211 00:25:48.681 Removing: /var/run/dpdk/spdk_pid89225 00:25:48.681 Removing: /var/run/dpdk/spdk_pid89260 00:25:48.681 Removing: /var/run/dpdk/spdk_pid89266 00:25:48.681 Removing: /var/run/dpdk/spdk_pid89345 00:25:48.681 Removing: /var/run/dpdk/spdk_pid89358 00:25:48.681 Removing: /var/run/dpdk/spdk_pid89461 00:25:48.681 Removing: /var/run/dpdk/spdk_pid89468 00:25:48.681 Removing: /var/run/dpdk/spdk_pid89571 00:25:48.681 Removing: /var/run/dpdk/spdk_pid89578 00:25:48.681 Removing: /var/run/dpdk/spdk_pid90025 00:25:48.681 Removing: /var/run/dpdk/spdk_pid90068 00:25:48.681 Removing: /var/run/dpdk/spdk_pid90177 00:25:48.681 Removing: /var/run/dpdk/spdk_pid90256 00:25:48.681 Removing: /var/run/dpdk/spdk_pid90619 00:25:48.681 Removing: /var/run/dpdk/spdk_pid90816 00:25:48.681 Removing: /var/run/dpdk/spdk_pid91228 00:25:48.681 Removing: /var/run/dpdk/spdk_pid91768 00:25:48.681 Removing: /var/run/dpdk/spdk_pid92630 00:25:48.681 Removing: /var/run/dpdk/spdk_pid93265 00:25:48.681 Removing: /var/run/dpdk/spdk_pid93267 00:25:48.681 Removing: /var/run/dpdk/spdk_pid95300 00:25:48.681 Removing: /var/run/dpdk/spdk_pid95350 00:25:48.681 Removing: /var/run/dpdk/spdk_pid95396 00:25:48.681 Removing: /var/run/dpdk/spdk_pid95444 00:25:48.681 Removing: /var/run/dpdk/spdk_pid95552 00:25:48.681 Removing: /var/run/dpdk/spdk_pid95612 00:25:48.681 Removing: /var/run/dpdk/spdk_pid95669 00:25:48.681 Removing: /var/run/dpdk/spdk_pid95722 00:25:48.681 Removing: /var/run/dpdk/spdk_pid96072 00:25:48.681 Removing: /var/run/dpdk/spdk_pid97278 00:25:48.681 Removing: /var/run/dpdk/spdk_pid97419 00:25:48.681 Removing: /var/run/dpdk/spdk_pid97660 00:25:48.681 Removing: /var/run/dpdk/spdk_pid98261 00:25:48.681 Removing: /var/run/dpdk/spdk_pid98415 00:25:48.681 Removing: /var/run/dpdk/spdk_pid98572 00:25:48.681 Removing: /var/run/dpdk/spdk_pid98668 00:25:48.940 Removing: /var/run/dpdk/spdk_pid98829 00:25:48.940 Removing: /var/run/dpdk/spdk_pid98934 00:25:48.940 Removing: /var/run/dpdk/spdk_pid99631 00:25:48.940 Removing: /var/run/dpdk/spdk_pid99662 00:25:48.940 Removing: /var/run/dpdk/spdk_pid99703 00:25:48.940 Removing: /var/run/dpdk/spdk_pid99950 00:25:48.940 Removing: /var/run/dpdk/spdk_pid99981 00:25:48.940 Clean 00:25:48.940 02:29:50 -- common/autotest_common.sh@1451 -- # return 0 00:25:48.940 02:29:50 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:25:48.940 02:29:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:48.940 02:29:50 -- common/autotest_common.sh@10 -- # set +x 00:25:48.940 02:29:50 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:25:48.940 02:29:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:48.940 02:29:50 -- common/autotest_common.sh@10 -- # set +x 00:25:48.940 02:29:50 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:48.940 02:29:50 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:48.940 02:29:50 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:48.940 02:29:50 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:25:48.940 02:29:50 -- spdk/autotest.sh@394 -- # hostname 
00:25:48.940 02:29:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:49.199 geninfo: WARNING: invalid characters removed from testname! 00:26:15.747 02:30:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:15.747 02:30:16 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:17.651 02:30:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:20.939 02:30:22 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:22.843 02:30:24 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:25.377 02:30:27 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:27.921 02:30:29 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:27.921 02:30:29 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:26:27.921 02:30:29 -- common/autotest_common.sh@1681 -- $ lcov --version 00:26:27.921 02:30:29 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:26:27.921 02:30:29 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:26:27.921 02:30:29 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:26:27.921 02:30:29 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:26:27.921 02:30:29 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:26:27.921 02:30:29 -- 
scripts/common.sh@336 -- $ IFS=.-: 00:26:27.921 02:30:29 -- scripts/common.sh@336 -- $ read -ra ver1 00:26:27.921 02:30:29 -- scripts/common.sh@337 -- $ IFS=.-: 00:26:27.921 02:30:29 -- scripts/common.sh@337 -- $ read -ra ver2 00:26:27.921 02:30:29 -- scripts/common.sh@338 -- $ local 'op=<' 00:26:27.921 02:30:29 -- scripts/common.sh@340 -- $ ver1_l=2 00:26:27.921 02:30:29 -- scripts/common.sh@341 -- $ ver2_l=1 00:26:27.921 02:30:29 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:26:27.921 02:30:29 -- scripts/common.sh@344 -- $ case "$op" in 00:26:27.921 02:30:29 -- scripts/common.sh@345 -- $ : 1 00:26:27.921 02:30:29 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:26:27.921 02:30:29 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:27.921 02:30:29 -- scripts/common.sh@365 -- $ decimal 1 00:26:27.921 02:30:29 -- scripts/common.sh@353 -- $ local d=1 00:26:27.921 02:30:29 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:26:27.921 02:30:29 -- scripts/common.sh@355 -- $ echo 1 00:26:27.921 02:30:29 -- scripts/common.sh@365 -- $ ver1[v]=1 00:26:27.921 02:30:29 -- scripts/common.sh@366 -- $ decimal 2 00:26:28.186 02:30:29 -- scripts/common.sh@353 -- $ local d=2 00:26:28.186 02:30:29 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:26:28.186 02:30:29 -- scripts/common.sh@355 -- $ echo 2 00:26:28.186 02:30:29 -- scripts/common.sh@366 -- $ ver2[v]=2 00:26:28.186 02:30:29 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:26:28.186 02:30:29 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:26:28.186 02:30:29 -- scripts/common.sh@368 -- $ return 0 00:26:28.186 02:30:29 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.186 02:30:29 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:26:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.186 --rc genhtml_branch_coverage=1 00:26:28.186 --rc genhtml_function_coverage=1 00:26:28.186 --rc genhtml_legend=1 00:26:28.186 --rc geninfo_all_blocks=1 00:26:28.186 --rc geninfo_unexecuted_blocks=1 00:26:28.186 00:26:28.186 ' 00:26:28.186 02:30:29 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:26:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.186 --rc genhtml_branch_coverage=1 00:26:28.186 --rc genhtml_function_coverage=1 00:26:28.186 --rc genhtml_legend=1 00:26:28.186 --rc geninfo_all_blocks=1 00:26:28.186 --rc geninfo_unexecuted_blocks=1 00:26:28.186 00:26:28.186 ' 00:26:28.186 02:30:29 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:26:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.186 --rc genhtml_branch_coverage=1 00:26:28.186 --rc genhtml_function_coverage=1 00:26:28.186 --rc genhtml_legend=1 00:26:28.186 --rc geninfo_all_blocks=1 00:26:28.186 --rc geninfo_unexecuted_blocks=1 00:26:28.186 00:26:28.186 ' 00:26:28.186 02:30:29 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:26:28.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.186 --rc genhtml_branch_coverage=1 00:26:28.186 --rc genhtml_function_coverage=1 00:26:28.186 --rc genhtml_legend=1 00:26:28.186 --rc geninfo_all_blocks=1 00:26:28.186 --rc geninfo_unexecuted_blocks=1 00:26:28.186 00:26:28.186 ' 00:26:28.186 02:30:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:28.186 02:30:29 -- scripts/common.sh@15 -- $ shopt -s extglob 00:26:28.186 02:30:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh 
]] 00:26:28.186 02:30:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.186 02:30:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.186 02:30:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.186 02:30:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.186 02:30:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.186 02:30:29 -- paths/export.sh@5 -- $ export PATH 00:26:28.186 02:30:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.186 02:30:29 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:28.186 02:30:29 -- common/autobuild_common.sh@479 -- $ date +%s 00:26:28.186 02:30:29 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731033029.XXXXXX 00:26:28.186 02:30:29 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731033029.ftwkSu 00:26:28.186 02:30:29 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:26:28.186 02:30:29 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:26:28.186 02:30:29 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:26:28.186 02:30:29 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:26:28.186 02:30:29 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:28.186 02:30:29 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:28.187 02:30:29 -- common/autobuild_common.sh@495 -- $ get_config_params 00:26:28.187 02:30:29 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:26:28.187 02:30:29 -- common/autotest_common.sh@10 -- $ set +x 00:26:28.187 02:30:29 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage 
--with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:26:28.187 02:30:29 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:26:28.187 02:30:29 -- pm/common@17 -- $ local monitor 00:26:28.187 02:30:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:28.187 02:30:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:28.187 02:30:29 -- pm/common@25 -- $ sleep 1 00:26:28.187 02:30:29 -- pm/common@21 -- $ date +%s 00:26:28.187 02:30:29 -- pm/common@21 -- $ date +%s 00:26:28.187 02:30:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731033029 00:26:28.187 02:30:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731033029 00:26:28.187 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731033029_collect-cpu-load.pm.log 00:26:28.187 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731033029_collect-vmstat.pm.log 00:26:29.152 02:30:30 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:26:29.152 02:30:30 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:26:29.152 02:30:30 -- spdk/autopackage.sh@14 -- $ timing_finish 00:26:29.152 02:30:30 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:29.152 02:30:30 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:29.152 02:30:30 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:29.152 02:30:30 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:26:29.152 02:30:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:29.152 02:30:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:29.152 02:30:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:29.152 02:30:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:26:29.152 02:30:30 -- pm/common@44 -- $ pid=102641 00:26:29.152 02:30:30 -- pm/common@50 -- $ kill -TERM 102641 00:26:29.152 02:30:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:29.152 02:30:30 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:26:29.152 02:30:30 -- pm/common@44 -- $ pid=102643 00:26:29.152 02:30:30 -- pm/common@50 -- $ kill -TERM 102643 00:26:29.152 + [[ -n 5997 ]] 00:26:29.152 + sudo kill 5997 00:26:29.162 [Pipeline] } 00:26:29.178 [Pipeline] // timeout 00:26:29.183 [Pipeline] } 00:26:29.200 [Pipeline] // stage 00:26:29.205 [Pipeline] } 00:26:29.222 [Pipeline] // catchError 00:26:29.232 [Pipeline] stage 00:26:29.234 [Pipeline] { (Stop VM) 00:26:29.247 [Pipeline] sh 00:26:29.529 + vagrant halt 00:26:32.818 ==> default: Halting domain... 00:26:39.399 [Pipeline] sh 00:26:39.680 + vagrant destroy -f 00:26:42.214 ==> default: Removing domain... 
00:26:42.486 [Pipeline] sh 00:26:42.768 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:26:42.777 [Pipeline] } 00:26:42.792 [Pipeline] // stage 00:26:42.797 [Pipeline] } 00:26:42.811 [Pipeline] // dir 00:26:42.816 [Pipeline] } 00:26:42.830 [Pipeline] // wrap 00:26:42.836 [Pipeline] } 00:26:42.848 [Pipeline] // catchError 00:26:42.857 [Pipeline] stage 00:26:42.859 [Pipeline] { (Epilogue) 00:26:42.872 [Pipeline] sh 00:26:43.155 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:48.439 [Pipeline] catchError 00:26:48.441 [Pipeline] { 00:26:48.454 [Pipeline] sh 00:26:48.737 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:48.995 Artifacts sizes are good 00:26:49.005 [Pipeline] } 00:26:49.019 [Pipeline] // catchError 00:26:49.032 [Pipeline] archiveArtifacts 00:26:49.040 Archiving artifacts 00:26:49.185 [Pipeline] cleanWs 00:26:49.196 [WS-CLEANUP] Deleting project workspace... 00:26:49.196 [WS-CLEANUP] Deferred wipeout is used... 00:26:49.211 [WS-CLEANUP] done 00:26:49.212 [Pipeline] } 00:26:49.223 [Pipeline] // stage 00:26:49.226 [Pipeline] } 00:26:49.237 [Pipeline] // node 00:26:49.242 [Pipeline] End of Pipeline 00:26:49.280 Finished: SUCCESS